Artificial Intelligence Nanodegree

Computer Vision Capstone

Project: Facial Keypoint Detection


Welcome to the final Computer Vision project in the Artificial Intelligence Nanodegree program!

In this project, you’ll combine your knowledge of computer vision techniques and deep learning to build an end-to-end facial keypoint recognition system! Facial keypoints include points around the eyes, nose, and mouth on any face and are used in many applications, from facial tracking to emotion recognition.

There are three main parts to this project:

Part 1 : Investigating OpenCV, pre-processing, and face detection

Part 2 : Training a Convolutional Neural Network (CNN) to detect facial keypoints

Part 3 : Putting parts 1 and 2 together to identify facial keypoints on any image!


Here's what you need to know to complete the project:

  1. In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested.

    a. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

  2. In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation.

    a. Each section where you will answer a question is preceded by a 'Question X' header.

    b. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.

The rubric contains optional suggestions for enhancing the project beyond the minimum requirements. If you decide to pursue the "(Optional)" sections, you should include the code in this IPython notebook.

Your project submission will be evaluated based on your answers to each of the questions and the code implementations you provide.

Steps to Complete the Project

Each part of the notebook is further broken down into separate steps. Feel free to use the links below to navigate the notebook.

In this project you will get to explore a few of the many computer vision algorithms built into the OpenCV library. This expansive computer vision library is now almost 20 years old and still growing!

The project itself is broken down into three large parts, then even further into separate steps. Make sure to read through each step, and complete any sections that begin with '(IMPLEMENTATION)' in the header; these implementation sections may contain multiple TODOs that will be marked in code. For convenience, we provide links to each of these steps below.

Part 1 : Investigating OpenCV, pre-processing, and face detection

  • Step 0: Detect Faces Using a Haar Cascade Classifier
  • Step 1: Add Eye Detection
  • Step 2: De-noise an Image for Better Face Detection
  • Step 3: Blur an Image and Perform Edge Detection
  • Step 4: Automatically Hide the Identity of an Individual

Part 2 : Training a Convolutional Neural Network (CNN) to detect facial keypoints

  • Step 5: Create a CNN to Recognize Facial Keypoints
  • Step 6: Compile and Train the Model
  • Step 7: Visualize the Loss and Answer Questions

Part 3 : Putting parts 1 and 2 together to identify facial keypoints on any image!

  • Step 8: Build a Robust Facial Keypoints Detector (Complete the CV Pipeline)

Step 0: Detect Faces Using a Haar Cascade Classifier

Have you ever wondered how Facebook automatically tags images with your friends' faces? Or how high-end cameras automatically find and focus on a certain person's face? Applications like these depend heavily on the machine learning task known as face detection - which is the task of automatically finding faces in images containing people.

At its root, face detection is a classification problem - that is, a problem of distinguishing between distinct classes of things. With face detection, these distinct classes are 1) images of human faces and 2) everything else.

We use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the detector_architectures directory.

Import Resources

In the next python cell, we load in the required libraries for this section of the project.

In [1]:
# Import required libraries for this section

%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt
import math
import cv2                     # OpenCV library for computer vision
from PIL import Image
import time 

Next, we load in and display a test image for performing face detection.

Note: by default OpenCV assumes the ordering of our image's color channels is Blue, then Green, then Red. This is slightly out of order with most image types we'll use in these experiments, whose color channels are ordered Red, then Green, then Blue. In order to swap the Blue and Red channels of our test image, we will use OpenCV's cvtColor function, which you can read more about by checking out some of its documentation located here. This is a general utility function that can perform other transformations too, like converting a color image to grayscale or transforming a standard color image to the HSV color space.
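For reference, here is a minimal sketch of a few cvtColor conversions (the flags are standard OpenCV constants, and the image path is the test image used in this notebook):

import cv2

bgr = cv2.imread('images/test_image_1.jpg')   # OpenCV loads images in BGR channel order
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)    # swap the Blue and Red channels for plotting
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # single-channel grayscale
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)    # Hue / Saturation / Value color space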

In [2]:
# Load in color image for face detection
image = cv2.imread('images/test_image_1.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Plot our image using subplots to specify a size and title
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)
Out[2]:
<matplotlib.image.AxesImage at 0x7fd5208cf0b8>

There are a lot of people - and faces - in this picture. 13 faces to be exact! In the next code cell, we demonstrate how to use a Haar Cascade classifier to detect all the faces in this test image.

This face detector uses information about patterns of intensity in an image to reliably detect faces under varying light conditions. So, to use this face detector, we'll first convert the image from color to grayscale.

Then, we load in the fully trained architecture of the face detector - found in the file haarcascade_frontalface_default.xml - and use it on our image to find faces!

To learn more about the parameters of the detector see this post.
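For reference, here is a minimal sketch of the same detection call with its parameters named - the values shown are illustrative, not the ones used in the cell below:

import cv2

gray = cv2.cvtColor(cv2.imread('images/test_image_1.jpg'), cv2.COLOR_BGR2GRAY)
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
faces = face_cascade.detectMultiScale(
    gray,               # grayscale input image
    scaleFactor=1.2,    # how much the image is shrunk at each step of the image pyramid
    minNeighbors=5,     # overlapping candidate detections required to keep a box
    minSize=(30, 30))   # smallest face (in pixels) to look for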

In [3]:
# Convert the RGB  image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray, 4, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the orginal image to draw face detections on
image_with_detections = np.copy(image)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 13
Out[3]:
<matplotlib.image.AxesImage at 0x7fd520826d68>

In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.
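For example, a detected face can be cropped straight out of the image from its bounding box - a minimal sketch, assuming faces and image from the cell above and at least one detection:

# Crop the first detected face out of the image using its bounding box
# (rows are indexed by y, columns by x)
(x, y, w, h) = faces[0]
face_crop = image[y:y+h, x:x+w]
print('Face crop shape:', face_crop.shape)   # (h, w, 3) for the color image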


Step 1: Add Eye Detection

There are other pre-trained detectors available that use a Haar Cascade Classifier - including full human body detectors, license plate detectors, and more. A full list of the pre-trained architectures can be found here.

To test your eye detector, we'll first read in a new test image with just a single face.

In [4]:
# Load in color image for face detection
image = cv2.imread('images/james.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Plot the RGB image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)
Out[4]:
<matplotlib.image.AxesImage at 0x7fd5208570f0>

Notice that even though the image is black and white, we have read it in as a color image, so it will still need to be converted to grayscale in order to perform the most accurate face detection.

So, the next steps will be to convert this image to grayscale, then load OpenCV's face detector and run it with parameters that detect this face accurately.

In [5]:
# Convert the RGB  image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray, 1.25, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the orginal image to draw face detections on
image_with_detections = np.copy(image)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face Detection')
ax1.imshow(image_with_detections)
Number of faces detected: 1
Out[5]:
<matplotlib.image.AxesImage at 0x7fd5208000f0>

(IMPLEMENTATION) Add an eye detector to the current face detection setup.

A Haar-cascade eye detector can be included in the same way as the face detector; in this first task, it will be your job to do just that.

To set up an eye detector, use the stored parameters of the eye cascade detector, called haarcascade_eye.xml, located in the detector_architectures subdirectory. In the next code cell, create your eye detector and store its detections.

A few notes before you get started:

First, make sure to give your loaded eye detector the variable name

eye_cascade

and give the list of eye regions you detect the variable name

eyes

Second, since we've already run the face detector over this image, you should only search for eyes within the rectangular face regions detected in faces. This will minimize false detections.

Lastly, once you've run your eye detector over the facial detection region, you should display the RGB image with both the face detection boxes (in red) and your eye detections (in green) to verify that everything works as expected.

In [6]:
# Make a copy of the original image to plot rectangle detections
image_with_detections = np.copy(image)   

# Loop over the detections and draw their corresponding face detection boxes
for (x,y,w,h) in faces:
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h),(255,0,0), 3)  
    
# Do not change the code above this comment!

    
## TODO: Add eye detection, using haarcascade_eye.xml, to the current face detector algorithm
## Load the Haar cascade into the eye_cascade variable, per the instructions above.
eye_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_eye.xml')
## TODO: Loop over the eye detections and draw their corresponding boxes in green on image_with_detections
## Below developed with aid of OpenCV documentation -- https://docs.opencv.org/3.3.0/d7/d8b/tutorial_py_face_detection.html
for (x,y,w,h) in faces:
    roi_gray = gray[y:y+h, x:x+w] # Grayscale face region used as input to the eye cascade
    roi_colour = image_with_detections[y:y+h, x:x+w] # Colour face region; a view, so drawing here marks the full image
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex,ey,ew,eh) in eyes: ## Notation: ex stands for eye-x, ey for eye-y, etc.
        cv2.rectangle(roi_colour, (ex,ey), (ex+ew, ey+eh), (0,255,0), 2)

        

# Plot the image with both faces and eyes detected
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face and Eye Detection')
ax1.imshow(image_with_detections)
Out[6]:
<matplotlib.image.AxesImage at 0x7fd5207ae550>

(Optional) Add face and eye detection to your laptop camera

It's time to kick it up a notch, and add face and eye detection to your laptop's camera! Afterwards, you'll be able to show off your creation like in the gif shown below - made with a completed version of the code!

Notice that not all of the detections here are perfect - and your result need not be perfect either. You should spend a small amount of time tuning the parameters of your detectors to get reasonable results, but don't hold out for perfection. If we wanted perfection we'd need to spend a ton of time tuning the parameters of each detector, cleaning up the input image frames, etc. You can think of this as more of a rapid prototype.

The next cell contains code for a wrapper function called laptop_camera_face_eye_detector that, when called, will activate your laptop's camera. You will place the relevant face and eye detection code in this wrapper function to implement face/eye detection and mark those detections on each image frame that your camera captures.

Before adding anything to the function, you can run it to get an idea of how it works - a small window should pop up showing you the live feed from your camera; you can press any key to close this window.

Note: Mac users may find that activating this function kills the kernel of their notebook every once in a while. If this happens to you, just restart your notebook's kernel, activate cell(s) containing any crucial import statements, and you'll be good to go!

In [7]:
### Add face and eye detection to this laptop camera function 
# Make sure to draw out all faces/eyes found in each frame on the shown video feed

import cv2
import time 


## Face and eye detector cascades copied from the snippets above
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_eye.xml')

# wrapper function for face/eye detection with your laptop camera
def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(0)

    # Try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
    
    # Keep the video stream open
    while rval:
        # Most code is recycled from above. First, copy the frame to draw detections on
        image_with_detections = np.copy(frame)
        # Convert to grayscale for detection (camera frames arrive in BGR order)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Detect faces
        faces = face_cascade.detectMultiScale(gray, 1.3, 3)
        # As with the still image in Step 1, limit the eye search space to the region containing a face.
        # After all, eyes don't just float around disembodied, do they?
        for (x,y,w,h) in faces:
            cv2.rectangle(image_with_detections, (x,y), (x+w, y+h), (0,0,255), 3) # (0,0,255) is red in BGR order
            roi_gray = gray[y:y+h, x:x+w]
            roi_colour = image_with_detections[y:y+h, x:x+w] # a view, so drawing here marks the frame copy
            eyes = eye_cascade.detectMultiScale(roi_gray)
            for (ex,ey,ew,eh) in eyes: ## Notation: ex stands for eye-x, ey for eye-y, etc.
                cv2.rectangle(roi_colour, (ex,ey), (ex+ew, ey+eh), (0,255,0), 2)
        # Show the camera frame with all the face and eye detections marked
        cv2.imshow("face detection activated", image_with_detections)
        
        # Exit functionality - press any key to exit laptop video
        key = cv2.waitKey(20)
        if key > 0: # Exit by pressing any key
            # Destroy windows 
            cv2.destroyAllWindows()
            
            # Make sure window closes on OSx
            for i in range (1,5):
                cv2.waitKey(1)
            return
        
        # Read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()    
In [ ]:
# Call the laptop camera face/eye detector function above
laptop_camera_go()
# I implemented this, but I can't test it. Always crashes on my device. 

Step 2: De-noise an Image for Better Face Detection

Image quality is an important aspect of any computer vision task. Typically, when creating a set of images to train a deep learning network, significant care is taken to ensure that training images are free of visual noise or artifacts that hinder object detection. While computer vision algorithms - like a face detector - are typically trained on 'nice' data such as this, new test data doesn't always look so nice!

When applying a trained computer vision algorithm to a new piece of test data one often cleans it up first before feeding it in. This sort of cleaning - referred to as pre-processing - can include a number of cleaning phases like blurring, de-noising, color transformations, etc., and many of these tasks can be accomplished using OpenCV.

In this short subsection we explore OpenCV's noise-removal functionality to see how we can clean up a noisy image, which we then feed into our trained face detector.

Create a noisy image to work with

In the next cell, we create an artificial noisy version of the previous multi-face image. This is a little exaggerated - we don't typically get images that are this noisy - but image noise, or 'graininess', in a digital image is a fairly common phenomenon.

In [7]:
# Load in the multi-face test image again
image = cv2.imread('images/test_image_1.jpg')

# Convert the image copy to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Make an array copy of this image
image_with_noise = np.asarray(image)

# Create noise - here we add noise sampled randomly from a Gaussian distribution: a common model for noise
noise_level = 40
noise = np.random.randn(image.shape[0],image.shape[1],image.shape[2])*noise_level

# Add this noise to the array image copy
image_with_noise = image_with_noise + noise

# Convert back to uint8 format
image_with_noise = np.asarray([np.uint8(np.clip(i,0,255)) for i in image_with_noise])

# Plot our noisy image!
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Noisy Image')
ax1.imshow(image_with_noise)
Out[7]:
<matplotlib.image.AxesImage at 0x7fd5207d46a0>

In the context of face detection, the problem with an image like this is that - due to noise - we may miss some faces or get false detections.

In the next cell we apply the same trained OpenCV detector with the same settings as before, to see what sort of detections we get.

In [8]:
# Convert the RGB  image to grayscale
gray_noise = cv2.cvtColor(image_with_noise, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray_noise, 4, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the orginal image to draw face detections on
image_with_detections = np.copy(image_with_noise)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Noisy Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 12
Out[8]:
<matplotlib.image.AxesImage at 0x7fd52077aeb8>

With this added noise we now miss one of the faces!

(IMPLEMENTATION) De-noise this image for better face detection

Time to get your hands dirty: using OpenCV's built-in color image de-noising function, fastNlMeansDenoisingColored, de-noise this image enough so that all the faces in the image are properly detected. Once you have cleaned the image in the next cell, use the cell that follows to run our trained face detector over the cleaned image to check out its detections.

You can find its official documentation here and a useful example here.

Note: you can keep all parameters except photo_render fixed as shown in the second link above. Play around with the value of this parameter - see how it affects the resulting cleaned image.

In [9]:
## TODO: Use OpenCV's built in color image de-noising function to clean up our noisy image!
from matplotlib import pyplot as plt

denoised_image = cv2.fastNlMeansDenoisingColored(image_with_noise, None, 60, 10, 21, 7)
denoised_image_2 = cv2.fastNlMeansDenoisingColored(image_with_noise, None, 60, 20, 21, 7)
denoised_image_3 = cv2.fastNlMeansDenoisingColored(image_with_noise, None, 60, 30, 21, 7)
denoised_image_4 = cv2.fastNlMeansDenoisingColored(image_with_noise, None, 60, 40, 21, 7)
# Arguments: src, dst, h (luminance filter strength), hColor (colour filter strength),
# templateWindowSize, searchWindowSize
plt.subplot(2, 2, 1) # Initial filter strength
plt.imshow(denoised_image)
plt.subplot(2, 2, 2) # Colour filter strength doubled
plt.imshow(denoised_image_2)
plt.subplot(2, 2, 3) # Colour filter strength tripled
plt.imshow(denoised_image_3)
plt.subplot(2, 2, 4) # Colour filter strength quadrupled
plt.imshow(denoised_image_4)
plt.show()

# A colour filter strength of 40 gives the cleanest result on this test image.
plt.imshow(denoised_image_4)
plt.show()
In [10]:
## TODO: Run the face detector on the de-noised image to improve your detections and display the result
## Imported the majority from Step 1 with slight modification
# Using denoised_image_4 (the highest colour filter strength) from the cell above.
gray_noise = cv2.cvtColor(denoised_image_4, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray_noise, 4, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))
# Make a copy of the orginal image to draw face detections on
image_with_detections = np.copy(denoised_image_4)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('De-noised Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 13
Out[10]:
<matplotlib.image.AxesImage at 0x7fd4dadaa160>

Step 3: Blur an Image and Perform Edge Detection

Now that we have developed a simple pipeline for detecting faces using OpenCV - let's start playing around with a few fun things we can do with all those detected faces!

Importance of Blur in Edge Detection

Edge detection is a concept that pops up almost everywhere in computer vision applications, as edge-based features (as well as features built on top of edges) are often some of the best features for e.g., object detection and recognition problems.

Edge detection is a dimension reduction technique - by keeping only the edges of an image we get to throw away a lot of non-discriminating information. And typically the most useful kind of edge-detection is one that preserves only the important, global structures (ignoring local structures that aren't very discriminative). So removing local structures / retaining global structures is a crucial pre-processing step to performing edge detection in an image, and blurring can do just that.

Below is an animated gif showing the result of an edge-detected cat taken from Wikipedia, where the image is gradually blurred more and more prior to edge detection. When the animation begins you can't quite make out what it's a picture of, but as the animation evolves and local structures are removed via blurring the cat becomes visible in the edge-detected image.

Edge detection is a convolution performed on the image itself, and you can read about Canny edge detection on this OpenCV documentation page.

Canny edge detection

In the cell below we load in a test image, then apply Canny edge detection on it. The original image is shown on the left panel of the figure, while the edge-detected version of the image is shown on the right. Notice how the result looks very busy - there are too many little details preserved in the image before it is sent to the edge detector. When applied in computer vision applications, edge detection should preserve global structure, doing away with local structures that don't help describe what objects are in the image.

In [11]:
# Load in the image
image = cv2.imread('images/fawzia.jpg')

# Convert to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)  

# Perform Canny edge detection
edges = cv2.Canny(gray,100,200)

# Dilate the image to amplify edges
edges = cv2.dilate(edges, None)

# Plot the RGB and edge-detected image
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(121)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)

ax2 = fig.add_subplot(122)
ax2.set_xticks([])
ax2.set_yticks([])

ax2.set_title('Canny Edges')
ax2.imshow(edges, cmap='gray')
Out[11]:
<matplotlib.image.AxesImage at 0x7fd4dade14e0>

Without first blurring the image, and removing small, local structures, a lot of irrelevant edge content gets picked up and amplified by the detector (as shown in the right panel above).
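For intuition, here is a minimal sketch of the blur-then-detect idea using cv2.GaussianBlur - an alternative to the averaging filter2D kernel used in the implementation below - assuming gray is the grayscale image from the cell above:

# Smooth the grayscale image before edge detection to suppress small local structure
blurred = cv2.GaussianBlur(gray, (5, 5), 0)    # 5x5 Gaussian kernel; sigma derived from the kernel size
edges_blurred = cv2.Canny(blurred, 100, 200)   # same thresholds as the unblurred example above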

(IMPLEMENTATION) Blur the image then perform edge detection

In the next cell, you will repeat this experiment - blurring the image first to remove these local structures, so that only the important boundary details remain in the edge-detected image.

Blur the image by using OpenCV's filter2D functionality - which is discussed in this documentation page - and use an averaging kernel of width equal to 4.

In [12]:
### TODO: Blur the test image using OpenCV's filter2D functionality

image = cv2.imread('images/fawzia.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# Use an averaging kernel, and a kernel width equal to 4
kernel = np.ones((4,4),np.float32)/16 # 16 because 4x4
image_blur = cv2.filter2D(gray, -1, kernel)
## TODO: Then perform Canny edge detection and display the output
cannyEdge = cv2.Canny(image_blur,100,200) #100 and 200 default values on OpenCv doc page

#Copied from segment above 
# Dilate the image to amplify edges
edges_from_blur = cv2.dilate(cannyEdge, None)

# Plot the RGB and edge-detected image
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(121)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Blurred Grayscale Image')
ax1.imshow(image_blur, cmap='gray') # Without cmap='gray', matplotlib renders single-channel images with its default colormap

ax2 = fig.add_subplot(122)
ax2.set_xticks([])
ax2.set_yticks([])

ax2.set_title('Canny Edges')
ax2.imshow(edges_from_blur, cmap='gray')
# Note: the odd colouring that appears without cmap='gray' comes from matplotlib's default colormap, not from the 2D convolution.
Out[12]:
<matplotlib.image.AxesImage at 0x7fd4dacb9828>

Step 4: Automatically Hide the Identity of an Individual

If you film something like a documentary or reality TV, you must get permission from every individual shown on film before you can show their face; otherwise, you need to blur the face out - by blurring the face a lot (so much so that even the global structures are obscured)! The same is true for projects like Google's StreetView maps - an enormous collection of mapping images taken from a fleet of Google vehicles. Because it would be impossible for Google to get the permission of every single person accidentally captured in one of these images, their pipeline must automatically detect and blur out everyone's faces. Here are a few examples of folks caught in the camera of a Google StreetView vehicle.

Read in an image to perform identity detection

Let's try this out for ourselves. Use the face detection pipeline built above together with what you know about using filter2D to blur an image, and use these in tandem to hide the identity of the person in the following image - loaded in and displayed in the next cell.

In [13]:
# Load in the image
image = cv2.imread('images/gus.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Display the image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Original Image')
ax1.imshow(image)
Out[13]:
<matplotlib.image.AxesImage at 0x7fd4dace2668>

(IMPLEMENTATION) Use blurring to hide the identity of an individual in an image

The idea here is to 1) automatically detect the face in this image, and then 2) blur it out! Make sure to adjust the parameters of the averaging blur filter to completely obscure this person's identity.

In [23]:
## TODO: Implement face detection
# Start with figsize parameter from cell above. 
figsize = (6,6) 
# To enhance blur magnitude, multiply each element by 20.
blur_figsize = tuple(20*x for x in figsize)
## Recycled from Execution Cell 3 
#gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
# Detect the faces in image
faces = face_cascade.detectMultiScale(image, 4, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the orginal image to draw face detections on
image_with_detections = np.copy(image)

## TODO: Blur the bounding box around each detected face using an averaging filter and display the result
# Invoke code from Step 3, and implement within just the bounding box. 
kernel = np.ones(blur_figsize,np.float32)/(blur_figsize[0]**2)
print(figsize[0]**2)#  Square of the figure size
for (x,y,w,h) in faces:
    image_with_detections[y:y+h, x:x+w] = cv2.filter2D(image_with_detections[y:y+h, x:x+w], -1, kernel)


# Display the image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Image Blurred Around Face')
ax1.imshow(image_with_detections)
Number of faces detected: 1
36
Out[23]:
<matplotlib.image.AxesImage at 0x7fd4dad71ac8>

(Optional) Build identity protection into your laptop camera

In this optional task you can add identity protection to your laptop camera, using the previously completed code where you added face detection to your laptop camera - and the task above. You should be able to get reasonable results with little parameter tuning - like the one shown in the gif below.

As with the previous video task, to make this perfect would require significant effort - so don't strive for perfection here, strive for reasonable quality.

The next cell contains code for a wrapper function called laptop_camera_identity_hider that - when called - will activate your laptop's camera. You need to place the relevant face detection and blurring code developed above in this function in order to blur faces entering your laptop camera's field of view.

Before adding anything to the function, you can call it to get the hang of how it works - a small window will pop up showing you the live feed from your camera; you can press any key to close this window.

Note: Mac users may find that activating this function kills the kernel of their notebook every once in a while. If this happens to you, just restart your notebook's kernel, activate cell(s) containing any crucial import statements, and you'll be good to go!

In [ ]:
### Insert face detection and blurring code into the wrapper below to create an identity protector on your laptop!
import cv2
import time 

def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(0)

    # Try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
    
    # Keep video stream open
    while rval:
        # Plot image from camera with detections marked
        cv2.imshow("face detection activated", frame)
        
        # Exit functionality - press any key to exit laptop video
        key = cv2.waitKey(20)
        if key > 0: # Exit by pressing any key
            # Destroy windows
            cv2.destroyAllWindows()
            
            for i in range (1,5):
                cv2.waitKey(1)
            return
        
        # Read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()    
        
In [ ]:
# Run laptop identity hider
laptop_camera_go()

Step 5: Create a CNN to Recognize Facial Keypoints

OpenCV is often used in practice with other machine learning and deep learning libraries to produce interesting results. In this stage of the project you will create your own end-to-end pipeline - employing convolutional networks in keras along with OpenCV - to apply a "selfie" filter to streaming video and images.

You will start by creating and then training a convolutional network that can detect facial keypoints in a small dataset of cropped images of human faces. We then guide you toward using OpenCV to expand your detection algorithm to more general images. What are facial keypoints? Let's take a look at some examples.

Facial keypoints (also called facial landmarks) are the small blue-green dots shown on each of the faces in the image above - there are 15 keypoints marked in each image. They mark important areas of the face - the eyes, corners of the mouth, the nose, etc. Facial keypoints can be used in a variety of machine learning applications from face and emotion recognition to commercial applications like the image filters popularized by Snapchat.

Below we illustrate a filter that, using the results of this section, automatically places sunglasses on people in images (using the facial keypoints to place the glasses correctly on each face). Here, the facial keypoints have been colored lime green for visualization purposes.

Make a facial keypoint detector

But first things first: how can we make a facial keypoint detector? Well, at a high level, notice that facial keypoint detection is a regression problem. A single face corresponds to a set of 15 facial keypoints (a set of 15 corresponding $(x, y)$ coordinates, i.e., an output point). Because our input data are images, we can employ a convolutional neural network to recognize patterns in our images and learn how to identify these keypoints given sets of labeled data.

In order to train a regressor, we need a training set - a set of facial image / facial keypoint pairs to train on. For this we will be using this dataset from Kaggle. We've already downloaded this data and placed it in the data directory. Make sure that you have both the training and test data files. The training dataset contains several thousand $96 \times 96$ grayscale images of cropped human faces, along with each face's 15 corresponding facial keypoints (also called landmarks) that have been placed by hand, and recorded in $(x, y)$ coordinates. This wonderful resource also has a substantial testing set, which we will use in tinkering with our convolutional network.

To load in this data, run the Python cell below - notice we will load in both the training and testing sets.

The load_data function is in the included utils.py file.

In [24]:
from utils import *

# Load training set
X_train, y_train = load_data()
print("X_train.shape == {}".format(X_train.shape))
print("y_train.shape == {}; y_train.min == {:.3f}; y_train.max == {:.3f}".format(
    y_train.shape, y_train.min(), y_train.max()))

# Load testing set
X_test, _ = load_data(test=True)
print("X_test.shape == {}".format(X_test.shape))
Using TensorFlow backend.
X_train.shape == (2140, 96, 96, 1)
y_train.shape == (2140, 30); y_train.min == -0.920; y_train.max == 0.996
X_test.shape == (1783, 96, 96, 1)

The load_data function in utils.py originates from this excellent blog post, which you are strongly encouraged to read. Please take the time now to review this function. Note how the output values - that is, the coordinates of each set of facial landmarks - have been normalized to take on values in the range $[-1, 1]$, while the pixel values of each input point (a facial image) have been normalized to the range $[0,1]$.
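For example, under this normalization a single label vector can be reshaped into 15 $(x, y)$ pairs and mapped back onto the $96 \times 96$ pixel grid - a minimal sketch, assuming the alternating x/y ordering and the (pixel - 48) / 48 scaling used in load_data:

# Recover pixel coordinates for the keypoints of the first training face
points = y_train[0].reshape(-1, 2)   # 15 rows of (x, y), values roughly in [-1, 1]
points_px = points * 48 + 48         # undo the (pixel - 48) / 48 normalization
print(points_px[:3])                 # first three keypoints in pixel coordinates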

Note: the original Kaggle dataset contains some images with several missing keypoints. For simplicity, the load_data function removes those images with missing labels from the dataset. As an optional extension, you are welcome to amend the load_data function to include the incomplete data points.

Visualize the Training Data

Execute the code cell below to visualize a subset of the training data.

In [25]:
import matplotlib.pyplot as plt
%matplotlib inline

fig = plt.figure(figsize=(20,20))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(9):
    ax = fig.add_subplot(3, 3, i + 1, xticks=[], yticks=[])
    plot_data(X_train[i], y_train[i], ax)

For each training image, there are two landmarks per eyebrow (four total), three per eye (six total), four for the mouth, and one for the tip of the nose.

Review the plot_data function in utils.py to understand how the 30-dimensional training labels in y_train are mapped to facial locations, as this function will prove useful for your pipeline.

(IMPLEMENTATION) Specify the CNN Architecture

In this section, you will specify a neural network for predicting the locations of facial keypoints. Use the code cell below to specify the architecture of your neural network. We have imported some layers that you may find useful for this task, but if you need to use more Keras layers, feel free to import them in the cell.

Your network should accept a $96 \times 96$ grayscale image as input, and it should output a vector with 30 entries, corresponding to the predicted (horizontal and vertical) locations of 15 facial keypoints. If you are not sure where to start, you can find some useful starting architectures in this blog, but you are not permitted to copy any of the architectures that you find online.

In [30]:
# Import deep learning resources from Keras
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Dropout, Activation
from keras.layers import Flatten, Dense, BatchNormalization


## TODO: Specify a CNN architecture
# Your model should accept 96x96 pixel grayscale images as input
# It should have a fully-connected output layer with 30 values (2 for each facial keypoint)

#Initial pass is a variation of my architecture from the Dog App Project, following
# Reviewer advice to include batch normalization and elu instead of relu. 
model = Sequential()
model.add(Convolution2D(filters=16, kernel_size=2, padding='same', input_shape=(96,96,1)))
model.add(BatchNormalization(axis = -1))
model.add(Activation('elu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Convolution2D(filters=8, kernel_size=2, padding='same'))
model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Convolution2D(filters=4, kernel_size=2, padding='same'))
model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(MaxPooling2D(pool_size=2)) 
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(30))
model.add(BatchNormalization())
model.add(Activation('elu'))



# Summarize the model
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_6 (Conv2D)            (None, 96, 96, 16)        80        
_________________________________________________________________
batch_normalization_4 (Batch (None, 96, 96, 16)        64        
_________________________________________________________________
activation_1 (Activation)    (None, 96, 96, 16)        0         
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 48, 48, 16)        0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 48, 48, 8)         520       
_________________________________________________________________
batch_normalization_5 (Batch (None, 48, 48, 8)         32        
_________________________________________________________________
activation_2 (Activation)    (None, 48, 48, 8)         0         
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 24, 24, 8)         0         
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 24, 24, 4)         132       
_________________________________________________________________
batch_normalization_6 (Batch (None, 24, 24, 4)         16        
_________________________________________________________________
activation_3 (Activation)    (None, 24, 24, 4)         0         
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 12, 12, 4)         0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 12, 12, 4)         0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 576)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 30)                17310     
_________________________________________________________________
batch_normalization_7 (Batch (None, 30)                120       
_________________________________________________________________
activation_4 (Activation)    (None, 30)                0         
=================================================================
Total params: 18,274
Trainable params: 18,158
Non-trainable params: 116
_________________________________________________________________

Step 6: Compile and Train the Model

After specifying your architecture, you'll need to compile and train the model to detect facial keypoints.

(IMPLEMENTATION) Compile and Train the Model

Use the compile method to configure the learning process. Experiment with your choice of optimizer; you may have some ideas about which will work best (SGD vs. RMSprop, etc), but take the time to empirically verify your theories.

Use the fit method to train the model. Break off a validation set by setting validation_split=0.2. Save the returned History object in the history variable.

Experiment with your model to minimize the validation loss (measured as mean squared error). A very good model will achieve about 0.0015 loss (though it's possible to do even better). When you have finished training, save your model as an HDF5 file with file path my_model.h5.
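For reference, a minimal sketch of that workflow is shown below - the optimizer, epoch count, and batch size are placeholders, not recommendations:

# Configure, train, and save the model (placeholder hyperparameters)
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(X_train, y_train, validation_split=0.2,
                    epochs=20, batch_size=32, verbose=1)
model.save('my_model.h5')   # serialize the architecture and weights to HDF5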

In [47]:
from keras.optimizers import SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam

#Assess the Best Optimizer
optimizer_choices = ['sgd', 'rmsprop', 'adagrad', 'adadelta', 'adamax', 'nadam']
mse_dict = {}
for opt_key in optimizer_choices:
    print("Assessing the {} model".format(opt_key))
    #Compile and Train the Model 
    model.compile(loss='mean_squared_error', optimizer = opt_key, metrics=['mse', 'acc'])
    # Batch size = 8, half the number of filters in the first convolutional layer (16)
    hist = model.fit(X_train, y_train, validation_split=0.2, epochs=20, batch_size=8, verbose=1)
    # Started with batch size = 8; the best optimizer reached ~71% validation accuracy. Increasing to 16 made the MSE far worse.
    # Testing with 20 epochs and batch size 8
    mse_dict[opt_key] = hist.history['mean_squared_error']
Assessing the sgd model
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 4s - loss: 0.1092 - mean_squared_error: 0.1092 - acc: 0.6822 - val_loss: 0.0515 - val_mean_squared_error: 0.0515 - val_acc: 0.6986
Epoch 2/20
1712/1712 [==============================] - 2s - loss: 0.1158 - mean_squared_error: 0.1158 - acc: 0.6752 - val_loss: 0.0537 - val_mean_squared_error: 0.0537 - val_acc: 0.6963
Epoch 3/20
1712/1712 [==============================] - 2s - loss: 0.0967 - mean_squared_error: 0.0967 - acc: 0.6770 - val_loss: 0.0538 - val_mean_squared_error: 0.0538 - val_acc: 0.6963
Epoch 4/20
1712/1712 [==============================] - 2s - loss: 0.1026 - mean_squared_error: 0.1026 - acc: 0.6682 - val_loss: 0.0523 - val_mean_squared_error: 0.0523 - val_acc: 0.6986
Epoch 5/20
1712/1712 [==============================] - 2s - loss: 0.1008 - mean_squared_error: 0.1008 - acc: 0.6735 - val_loss: 0.0504 - val_mean_squared_error: 0.0504 - val_acc: 0.6963
Epoch 6/20
1712/1712 [==============================] - 2s - loss: 0.1884 - mean_squared_error: 0.1884 - acc: 0.6735 - val_loss: 0.0456 - val_mean_squared_error: 0.0456 - val_acc: 0.6963
Epoch 7/20
1712/1712 [==============================] - 2s - loss: 0.0815 - mean_squared_error: 0.0815 - acc: 0.6752 - val_loss: 0.0538 - val_mean_squared_error: 0.0538 - val_acc: 0.6986
Epoch 8/20
1712/1712 [==============================] - 2s - loss: 0.0816 - mean_squared_error: 0.0816 - acc: 0.6671 - val_loss: 0.0541 - val_mean_squared_error: 0.0541 - val_acc: 0.6986
Epoch 9/20
1712/1712 [==============================] - 2s - loss: 0.1073 - mean_squared_error: 0.1073 - acc: 0.6723 - val_loss: 0.0444 - val_mean_squared_error: 0.0444 - val_acc: 0.6986
Epoch 10/20
1712/1712 [==============================] - 2s - loss: 0.0995 - mean_squared_error: 0.0995 - acc: 0.6746 - val_loss: 0.0508 - val_mean_squared_error: 0.0508 - val_acc: 0.6939
Epoch 11/20
1712/1712 [==============================] - 2s - loss: 0.1042 - mean_squared_error: 0.1042 - acc: 0.6776 - val_loss: 0.6171 - val_mean_squared_error: 0.6171 - val_acc: 0.3575
Epoch 12/20
1712/1712 [==============================] - 2s - loss: 0.2525 - mean_squared_error: 0.2525 - acc: 0.6688 - val_loss: 0.0455 - val_mean_squared_error: 0.0455 - val_acc: 0.7009
Epoch 13/20
1712/1712 [==============================] - 2s - loss: 0.1063 - mean_squared_error: 0.1063 - acc: 0.6752 - val_loss: 0.0535 - val_mean_squared_error: 0.0535 - val_acc: 0.6963
Epoch 14/20
1712/1712 [==============================] - 2s - loss: 0.0853 - mean_squared_error: 0.0853 - acc: 0.6729 - val_loss: 0.0506 - val_mean_squared_error: 0.0506 - val_acc: 0.7009
Epoch 15/20
1712/1712 [==============================] - 2s - loss: 0.1340 - mean_squared_error: 0.1340 - acc: 0.6636 - val_loss: 0.0538 - val_mean_squared_error: 0.0538 - val_acc: 0.6986
Epoch 16/20
1712/1712 [==============================] - 2s - loss: 0.1870 - mean_squared_error: 0.1870 - acc: 0.6846 - val_loss: 0.0566 - val_mean_squared_error: 0.0566 - val_acc: 0.6986
Epoch 17/20
1712/1712 [==============================] - 2s - loss: 0.1155 - mean_squared_error: 0.1155 - acc: 0.6746 - val_loss: 0.0492 - val_mean_squared_error: 0.0492 - val_acc: 0.7009
Epoch 18/20
1712/1712 [==============================] - 2s - loss: 0.1013 - mean_squared_error: 0.1013 - acc: 0.6752 - val_loss: 0.0473 - val_mean_squared_error: 0.0473 - val_acc: 0.7009
Epoch 19/20
1712/1712 [==============================] - 2s - loss: 0.0790 - mean_squared_error: 0.0790 - acc: 0.6758 - val_loss: 0.0536 - val_mean_squared_error: 0.0536 - val_acc: 0.6963
Epoch 20/20
1712/1712 [==============================] - 2s - loss: 0.1222 - mean_squared_error: 0.1222 - acc: 0.6764 - val_loss: 0.0533 - val_mean_squared_error: 0.0533 - val_acc: 0.6986
Assessing the rmsprop model
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 4s - loss: 0.0961 - mean_squared_error: 0.0961 - acc: 0.6770 - val_loss: 0.0538 - val_mean_squared_error: 0.0538 - val_acc: 0.7033
Epoch 2/20
1712/1712 [==============================] - 2s - loss: 0.1071 - mean_squared_error: 0.1071 - acc: 0.6706 - val_loss: 0.0521 - val_mean_squared_error: 0.0521 - val_acc: 0.7033
Epoch 3/20
1712/1712 [==============================] - 2s - loss: 0.0762 - mean_squared_error: 0.0762 - acc: 0.6857 - val_loss: 0.0505 - val_mean_squared_error: 0.0505 - val_acc: 0.6986
Epoch 4/20
1712/1712 [==============================] - 2s - loss: 0.1011 - mean_squared_error: 0.1011 - acc: 0.6741 - val_loss: 0.0524 - val_mean_squared_error: 0.0524 - val_acc: 0.7033
Epoch 5/20
1712/1712 [==============================] - 2s - loss: 0.0841 - mean_squared_error: 0.0841 - acc: 0.6863 - val_loss: 0.0529 - val_mean_squared_error: 0.0529 - val_acc: 0.6986
Epoch 6/20
1712/1712 [==============================] - 2s - loss: 0.0716 - mean_squared_error: 0.0716 - acc: 0.6828 - val_loss: 0.0514 - val_mean_squared_error: 0.0514 - val_acc: 0.6963
Epoch 7/20
1712/1712 [==============================] - 2s - loss: 0.1057 - mean_squared_error: 0.1057 - acc: 0.6822 - val_loss: 0.0520 - val_mean_squared_error: 0.0520 - val_acc: 0.7056
Epoch 8/20
1712/1712 [==============================] - 2s - loss: 0.0857 - mean_squared_error: 0.0857 - acc: 0.6706 - val_loss: 0.0517 - val_mean_squared_error: 0.0517 - val_acc: 0.7079
Epoch 9/20
1712/1712 [==============================] - 2s - loss: 0.1383 - mean_squared_error: 0.1383 - acc: 0.6764 - val_loss: 0.0506 - val_mean_squared_error: 0.0506 - val_acc: 0.7079
Epoch 10/20
1712/1712 [==============================] - 2s - loss: 0.0881 - mean_squared_error: 0.0881 - acc: 0.6840 - val_loss: 0.0514 - val_mean_squared_error: 0.0514 - val_acc: 0.7009
Epoch 11/20
1712/1712 [==============================] - 2s - loss: 0.0719 - mean_squared_error: 0.0719 - acc: 0.6776 - val_loss: 0.0503 - val_mean_squared_error: 0.0503 - val_acc: 0.7079
Epoch 12/20
1712/1712 [==============================] - 2s - loss: 0.0876 - mean_squared_error: 0.0876 - acc: 0.6840 - val_loss: 0.0488 - val_mean_squared_error: 0.0488 - val_acc: 0.7009
Epoch 13/20
1712/1712 [==============================] - 2s - loss: 0.0967 - mean_squared_error: 0.0967 - acc: 0.6793 - val_loss: 0.0474 - val_mean_squared_error: 0.0474 - val_acc: 0.7103
Epoch 14/20
1712/1712 [==============================] - 2s - loss: 0.1179 - mean_squared_error: 0.1179 - acc: 0.6811 - val_loss: 0.0470 - val_mean_squared_error: 0.0470 - val_acc: 0.7103
Epoch 15/20
1712/1712 [==============================] - 2s - loss: 0.0710 - mean_squared_error: 0.0710 - acc: 0.6852 - val_loss: 0.0493 - val_mean_squared_error: 0.0493 - val_acc: 0.6963
Epoch 16/20
1712/1712 [==============================] - 2s - loss: 0.0810 - mean_squared_error: 0.0810 - acc: 0.6700 - val_loss: 0.0489 - val_mean_squared_error: 0.0489 - val_acc: 0.7056
Epoch 17/20
1712/1712 [==============================] - 2s - loss: 0.1035 - mean_squared_error: 0.1035 - acc: 0.6787 - val_loss: 0.0491 - val_mean_squared_error: 0.0491 - val_acc: 0.7056
Epoch 18/20
1712/1712 [==============================] - 2s - loss: 0.0792 - mean_squared_error: 0.0792 - acc: 0.6764 - val_loss: 0.0488 - val_mean_squared_error: 0.0488 - val_acc: 0.7033
Epoch 19/20
1712/1712 [==============================] - 2s - loss: 0.0926 - mean_squared_error: 0.0926 - acc: 0.6758 - val_loss: 0.0489 - val_mean_squared_error: 0.0489 - val_acc: 0.7009
Epoch 20/20
1712/1712 [==============================] - 2s - loss: 0.0837 - mean_squared_error: 0.0837 - acc: 0.6711 - val_loss: 0.0493 - val_mean_squared_error: 0.0493 - val_acc: 0.7033
Assessing the adagrad model
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 4s - loss: 0.0931 - mean_squared_error: 0.0931 - acc: 0.6758 - val_loss: 0.0474 - val_mean_squared_error: 0.0474 - val_acc: 0.7056
Epoch 2/20
1712/1712 [==============================] - 2s - loss: 0.0884 - mean_squared_error: 0.0884 - acc: 0.6624 - val_loss: 0.0464 - val_mean_squared_error: 0.0464 - val_acc: 0.7079
Epoch 3/20
1712/1712 [==============================] - 2s - loss: 0.0720 - mean_squared_error: 0.0720 - acc: 0.6811 - val_loss: 0.0459 - val_mean_squared_error: 0.0459 - val_acc: 0.7056
Epoch 4/20
1712/1712 [==============================] - 2s - loss: 0.1492 - mean_squared_error: 0.1492 - acc: 0.6764 - val_loss: 0.0470 - val_mean_squared_error: 0.0470 - val_acc: 0.7033
Epoch 5/20
1712/1712 [==============================] - 2s - loss: 0.0786 - mean_squared_error: 0.0786 - acc: 0.6828 - val_loss: 0.0471 - val_mean_squared_error: 0.0471 - val_acc: 0.7079
Epoch 6/20
1712/1712 [==============================] - 2s - loss: 0.1039 - mean_squared_error: 0.1039 - acc: 0.6770 - val_loss: 0.0478 - val_mean_squared_error: 0.0478 - val_acc: 0.7079
Epoch 7/20
1712/1712 [==============================] - 2s - loss: 0.0872 - mean_squared_error: 0.0872 - acc: 0.6922 - val_loss: 0.0488 - val_mean_squared_error: 0.0488 - val_acc: 0.7056
Epoch 8/20
1712/1712 [==============================] - 2s - loss: 0.0672 - mean_squared_error: 0.0672 - acc: 0.6758 - val_loss: 0.0479 - val_mean_squared_error: 0.0479 - val_acc: 0.7056
Epoch 9/20
1712/1712 [==============================] - 2s - loss: 0.0682 - mean_squared_error: 0.0682 - acc: 0.6793 - val_loss: 0.0476 - val_mean_squared_error: 0.0476 - val_acc: 0.7079
Epoch 10/20
1712/1712 [==============================] - 2s - loss: 0.1307 - mean_squared_error: 0.1307 - acc: 0.6828 - val_loss: 0.0461 - val_mean_squared_error: 0.0461 - val_acc: 0.7056
Epoch 11/20
1712/1712 [==============================] - 2s - loss: 0.0676 - mean_squared_error: 0.0676 - acc: 0.6805 - val_loss: 0.0474 - val_mean_squared_error: 0.0474 - val_acc: 0.7033
Epoch 12/20
1712/1712 [==============================] - 2s - loss: 0.0795 - mean_squared_error: 0.0795 - acc: 0.6764 - val_loss: 0.0474 - val_mean_squared_error: 0.0474 - val_acc: 0.7009
Epoch 13/20
1712/1712 [==============================] - 2s - loss: 0.0867 - mean_squared_error: 0.0867 - acc: 0.6776 - val_loss: 0.0479 - val_mean_squared_error: 0.0479 - val_acc: 0.6986
Epoch 14/20
1712/1712 [==============================] - 2s - loss: 0.0681 - mean_squared_error: 0.0681 - acc: 0.6735 - val_loss: 0.0474 - val_mean_squared_error: 0.0474 - val_acc: 0.7079
Epoch 15/20
1712/1712 [==============================] - 2s - loss: 0.0672 - mean_squared_error: 0.0672 - acc: 0.6939 - val_loss: 0.0481 - val_mean_squared_error: 0.0481 - val_acc: 0.7079
Epoch 16/20
1712/1712 [==============================] - 2s - loss: 0.0657 - mean_squared_error: 0.0657 - acc: 0.6852 - val_loss: 0.0481 - val_mean_squared_error: 0.0481 - val_acc: 0.7079
Epoch 17/20
1712/1712 [==============================] - 2s - loss: 0.0632 - mean_squared_error: 0.0632 - acc: 0.6717 - val_loss: 0.0467 - val_mean_squared_error: 0.0467 - val_acc: 0.7033
Epoch 18/20
1712/1712 [==============================] - 2s - loss: 0.0935 - mean_squared_error: 0.0935 - acc: 0.6735 - val_loss: 0.0476 - val_mean_squared_error: 0.0476 - val_acc: 0.7033
Epoch 19/20
1712/1712 [==============================] - 2s - loss: 0.0550 - mean_squared_error: 0.0550 - acc: 0.6939 - val_loss: 0.0487 - val_mean_squared_error: 0.0487 - val_acc: 0.7056
Epoch 20/20
1712/1712 [==============================] - 2s - loss: 0.0731 - mean_squared_error: 0.0731 - acc: 0.6828 - val_loss: 0.0476 - val_mean_squared_error: 0.0476 - val_acc: 0.7056
Assessing the adadelta model
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 4s - loss: 0.0888 - mean_squared_error: 0.0888 - acc: 0.6782 - val_loss: 0.0474 - val_mean_squared_error: 0.0474 - val_acc: 0.7056
Epoch 2/20
1712/1712 [==============================] - 3s - loss: 0.1413 - mean_squared_error: 0.1413 - acc: 0.6735 - val_loss: 0.0476 - val_mean_squared_error: 0.0476 - val_acc: 0.6986
Epoch 3/20
1712/1712 [==============================] - 3s - loss: 0.0933 - mean_squared_error: 0.0933 - acc: 0.6782 - val_loss: 0.0483 - val_mean_squared_error: 0.0483 - val_acc: 0.7033
Epoch 4/20
1712/1712 [==============================] - 3s - loss: 0.0881 - mean_squared_error: 0.0881 - acc: 0.6857 - val_loss: 0.0475 - val_mean_squared_error: 0.0475 - val_acc: 0.7056
Epoch 5/20
1712/1712 [==============================] - 3s - loss: 0.0814 - mean_squared_error: 0.0814 - acc: 0.6922 - val_loss: 0.0470 - val_mean_squared_error: 0.0470 - val_acc: 0.7033
Epoch 6/20
1712/1712 [==============================] - 3s - loss: 0.1289 - mean_squared_error: 0.1289 - acc: 0.6770 - val_loss: 0.0469 - val_mean_squared_error: 0.0469 - val_acc: 0.7009
Epoch 7/20
1712/1712 [==============================] - 3s - loss: 0.1088 - mean_squared_error: 0.1088 - acc: 0.6822 - val_loss: 0.0467 - val_mean_squared_error: 0.0467 - val_acc: 0.7079
Epoch 8/20
1712/1712 [==============================] - 3s - loss: 0.0721 - mean_squared_error: 0.0721 - acc: 0.6735 - val_loss: 0.0470 - val_mean_squared_error: 0.0470 - val_acc: 0.7056
Epoch 9/20
1712/1712 [==============================] - 3s - loss: 0.1069 - mean_squared_error: 0.1069 - acc: 0.6770 - val_loss: 0.0469 - val_mean_squared_error: 0.0469 - val_acc: 0.7033
Epoch 10/20
1712/1712 [==============================] - 3s - loss: 0.1414 - mean_squared_error: 0.1414 - acc: 0.6770 - val_loss: 0.0470 - val_mean_squared_error: 0.0470 - val_acc: 0.7056
Epoch 11/20
1712/1712 [==============================] - 3s - loss: 0.0854 - mean_squared_error: 0.0854 - acc: 0.6758 - val_loss: 0.0471 - val_mean_squared_error: 0.0471 - val_acc: 0.7056
Epoch 12/20
1712/1712 [==============================] - 3s - loss: 0.0725 - mean_squared_error: 0.0725 - acc: 0.6711 - val_loss: 0.0483 - val_mean_squared_error: 0.0483 - val_acc: 0.7056
Epoch 13/20
1712/1712 [==============================] - 3s - loss: 0.0667 - mean_squared_error: 0.0667 - acc: 0.6799 - val_loss: 0.0474 - val_mean_squared_error: 0.0474 - val_acc: 0.7079
Epoch 14/20
1712/1712 [==============================] - 3s - loss: 0.0610 - mean_squared_error: 0.0610 - acc: 0.6723 - val_loss: 0.0471 - val_mean_squared_error: 0.0471 - val_acc: 0.7079
Epoch 15/20
1712/1712 [==============================] - 3s - loss: 0.0749 - mean_squared_error: 0.0749 - acc: 0.6799 - val_loss: 0.0467 - val_mean_squared_error: 0.0467 - val_acc: 0.7079
Epoch 16/20
1712/1712 [==============================] - 3s - loss: 0.0669 - mean_squared_error: 0.0669 - acc: 0.6817 - val_loss: 0.0468 - val_mean_squared_error: 0.0468 - val_acc: 0.7079
Epoch 17/20
1712/1712 [==============================] - 3s - loss: 0.0774 - mean_squared_error: 0.0774 - acc: 0.6723 - val_loss: 0.0468 - val_mean_squared_error: 0.0468 - val_acc: 0.7079
Epoch 18/20
1712/1712 [==============================] - 3s - loss: 0.0636 - mean_squared_error: 0.0636 - acc: 0.6688 - val_loss: 0.0468 - val_mean_squared_error: 0.0468 - val_acc: 0.7079
Epoch 19/20
1712/1712 [==============================] - 3s - loss: 0.0816 - mean_squared_error: 0.0816 - acc: 0.6758 - val_loss: 0.0467 - val_mean_squared_error: 0.0467 - val_acc: 0.7009
Epoch 20/20
1712/1712 [==============================] - 3s - loss: 0.1061 - mean_squared_error: 0.1061 - acc: 0.6711 - val_loss: 0.0466 - val_mean_squared_error: 0.0466 - val_acc: 0.7079
Assessing the adamax model
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 4s - loss: 0.0577 - mean_squared_error: 0.0577 - acc: 0.6799 - val_loss: 0.0466 - val_mean_squared_error: 0.0466 - val_acc: 0.7056
Epoch 2/20
1712/1712 [==============================] - 3s - loss: 0.0730 - mean_squared_error: 0.0730 - acc: 0.6741 - val_loss: 0.0460 - val_mean_squared_error: 0.0460 - val_acc: 0.7079
Epoch 3/20
1712/1712 [==============================] - 3s - loss: 0.1061 - mean_squared_error: 0.1061 - acc: 0.6717 - val_loss: 0.0459 - val_mean_squared_error: 0.0459 - val_acc: 0.7056
Epoch 4/20
1712/1712 [==============================] - 3s - loss: 0.0744 - mean_squared_error: 0.0744 - acc: 0.6723 - val_loss: 0.0463 - val_mean_squared_error: 0.0463 - val_acc: 0.7079
Epoch 5/20
1712/1712 [==============================] - 3s - loss: 0.0995 - mean_squared_error: 0.0995 - acc: 0.6723 - val_loss: 0.0445 - val_mean_squared_error: 0.0445 - val_acc: 0.7056
Epoch 6/20
1712/1712 [==============================] - 3s - loss: 0.0678 - mean_squared_error: 0.0678 - acc: 0.6805 - val_loss: 0.0470 - val_mean_squared_error: 0.0470 - val_acc: 0.7079
Epoch 7/20
1712/1712 [==============================] - 2s - loss: 0.1039 - mean_squared_error: 0.1039 - acc: 0.6746 - val_loss: 0.0461 - val_mean_squared_error: 0.0461 - val_acc: 0.6986
Epoch 8/20
1712/1712 [==============================] - 2s - loss: 0.0661 - mean_squared_error: 0.0661 - acc: 0.6782 - val_loss: 0.0464 - val_mean_squared_error: 0.0464 - val_acc: 0.7056
Epoch 9/20
1712/1712 [==============================] - 2s - loss: 0.0662 - mean_squared_error: 0.0662 - acc: 0.6770 - val_loss: 0.0461 - val_mean_squared_error: 0.0461 - val_acc: 0.7056
Epoch 10/20
1712/1712 [==============================] - 2s - loss: 0.0660 - mean_squared_error: 0.0660 - acc: 0.6863 - val_loss: 0.0460 - val_mean_squared_error: 0.0460 - val_acc: 0.7009
Epoch 11/20
1712/1712 [==============================] - 2s - loss: 0.1000 - mean_squared_error: 0.1000 - acc: 0.6735 - val_loss: 0.0456 - val_mean_squared_error: 0.0456 - val_acc: 0.7056
Epoch 12/20
1712/1712 [==============================] - 2s - loss: 0.0648 - mean_squared_error: 0.0648 - acc: 0.6682 - val_loss: 0.0463 - val_mean_squared_error: 0.0463 - val_acc: 0.7033
Epoch 13/20
1712/1712 [==============================] - 2s - loss: 0.0898 - mean_squared_error: 0.0898 - acc: 0.6647 - val_loss: 0.0458 - val_mean_squared_error: 0.0458 - val_acc: 0.7079
Epoch 14/20
1712/1712 [==============================] - 2s - loss: 0.0914 - mean_squared_error: 0.0914 - acc: 0.6776 - val_loss: 0.0456 - val_mean_squared_error: 0.0456 - val_acc: 0.7033
Epoch 15/20
1712/1712 [==============================] - 2s - loss: 0.0765 - mean_squared_error: 0.0765 - acc: 0.6776 - val_loss: 0.0448 - val_mean_squared_error: 0.0448 - val_acc: 0.7079
Epoch 16/20
1712/1712 [==============================] - 2s - loss: 0.0653 - mean_squared_error: 0.0653 - acc: 0.6723 - val_loss: 0.0452 - val_mean_squared_error: 0.0452 - val_acc: 0.7079
Epoch 17/20
1712/1712 [==============================] - 2s - loss: 0.0663 - mean_squared_error: 0.0663 - acc: 0.6852 - val_loss: 0.0453 - val_mean_squared_error: 0.0453 - val_acc: 0.7056
Epoch 18/20
1712/1712 [==============================] - 2s - loss: 0.1109 - mean_squared_error: 0.1109 - acc: 0.6682 - val_loss: 0.0444 - val_mean_squared_error: 0.0444 - val_acc: 0.7033
Epoch 19/20
1712/1712 [==============================] - 2s - loss: 0.0666 - mean_squared_error: 0.0666 - acc: 0.6711 - val_loss: 0.0453 - val_mean_squared_error: 0.0453 - val_acc: 0.6986
Epoch 20/20
1712/1712 [==============================] - 2s - loss: 0.0614 - mean_squared_error: 0.0614 - acc: 0.6741 - val_loss: 0.0447 - val_mean_squared_error: 0.0447 - val_acc: 0.7079
Assessing the nadam model
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 5s - loss: 0.0765 - mean_squared_error: 0.0765 - acc: 0.6729 - val_loss: 0.0467 - val_mean_squared_error: 0.0467 - val_acc: 0.6986
Epoch 2/20
1712/1712 [==============================] - 3s - loss: 0.0558 - mean_squared_error: 0.0558 - acc: 0.6752 - val_loss: 0.0414 - val_mean_squared_error: 0.0414 - val_acc: 0.7033
Epoch 3/20
1712/1712 [==============================] - 3s - loss: 0.0594 - mean_squared_error: 0.0594 - acc: 0.6828 - val_loss: 0.0456 - val_mean_squared_error: 0.0456 - val_acc: 0.7033
Epoch 4/20
1712/1712 [==============================] - 3s - loss: 0.0666 - mean_squared_error: 0.0666 - acc: 0.6729 - val_loss: 0.0410 - val_mean_squared_error: 0.0410 - val_acc: 0.7056
Epoch 5/20
1712/1712 [==============================] - 3s - loss: 0.0608 - mean_squared_error: 0.0608 - acc: 0.6817 - val_loss: 0.0471 - val_mean_squared_error: 0.0471 - val_acc: 0.6963
Epoch 6/20
1712/1712 [==============================] - 3s - loss: 0.0644 - mean_squared_error: 0.0644 - acc: 0.6671 - val_loss: 0.0461 - val_mean_squared_error: 0.0461 - val_acc: 0.7033
Epoch 7/20
1712/1712 [==============================] - 3s - loss: 0.0664 - mean_squared_error: 0.0664 - acc: 0.6770 - val_loss: 0.0382 - val_mean_squared_error: 0.0382 - val_acc: 0.7009
Epoch 8/20
1712/1712 [==============================] - 3s - loss: 0.0793 - mean_squared_error: 0.0793 - acc: 0.6782 - val_loss: 0.0388 - val_mean_squared_error: 0.0388 - val_acc: 0.6963
Epoch 9/20
1712/1712 [==============================] - 3s - loss: 0.0679 - mean_squared_error: 0.0679 - acc: 0.6636 - val_loss: 0.0417 - val_mean_squared_error: 0.0417 - val_acc: 0.6986
Epoch 10/20
1712/1712 [==============================] - 3s - loss: 0.0578 - mean_squared_error: 0.0578 - acc: 0.6729 - val_loss: 0.0423 - val_mean_squared_error: 0.0423 - val_acc: 0.7009
Epoch 11/20
1712/1712 [==============================] - 3s - loss: 0.0745 - mean_squared_error: 0.0745 - acc: 0.6688 - val_loss: 0.0373 - val_mean_squared_error: 0.0373 - val_acc: 0.7079
Epoch 12/20
1712/1712 [==============================] - 3s - loss: 0.0520 - mean_squared_error: 0.0520 - acc: 0.6560 - val_loss: 0.0390 - val_mean_squared_error: 0.0390 - val_acc: 0.7056
Epoch 13/20
1712/1712 [==============================] - 3s - loss: 0.0623 - mean_squared_error: 0.0623 - acc: 0.6694 - val_loss: 0.0342 - val_mean_squared_error: 0.0342 - val_acc: 0.6893
Epoch 14/20
1712/1712 [==============================] - 3s - loss: 0.0595 - mean_squared_error: 0.0595 - acc: 0.6746 - val_loss: 0.0390 - val_mean_squared_error: 0.0390 - val_acc: 0.6986
Epoch 15/20
1712/1712 [==============================] - 3s - loss: 0.0575 - mean_squared_error: 0.0575 - acc: 0.6595 - val_loss: 0.0402 - val_mean_squared_error: 0.0402 - val_acc: 0.7009
Epoch 16/20
1712/1712 [==============================] - 3s - loss: 0.0507 - mean_squared_error: 0.0507 - acc: 0.6530 - val_loss: 0.0361 - val_mean_squared_error: 0.0361 - val_acc: 0.7079
Epoch 17/20
1712/1712 [==============================] - 3s - loss: 0.0503 - mean_squared_error: 0.0503 - acc: 0.6542 - val_loss: 0.0383 - val_mean_squared_error: 0.0383 - val_acc: 0.7056
Epoch 18/20
1712/1712 [==============================] - 3s - loss: 0.0509 - mean_squared_error: 0.0509 - acc: 0.6641 - val_loss: 0.0360 - val_mean_squared_error: 0.0360 - val_acc: 0.7009
Epoch 19/20
1712/1712 [==============================] - 3s - loss: 0.0394 - mean_squared_error: 0.0394 - acc: 0.6764 - val_loss: 0.0361 - val_mean_squared_error: 0.0361 - val_acc: 0.6986
Epoch 20/20
1712/1712 [==============================] - 3s - loss: 0.0514 - mean_squared_error: 0.0514 - acc: 0.6729 - val_loss: 0.0336 - val_mean_squared_error: 0.0336 - val_acc: 0.6986
In [48]:
# Plot / visualize the MSE of each optimizer; the best configuration is saved in the next cell
legend_optimizers = []
plt.figure(figsize=(20,20))
for opt_key, opt_value in mse_dict.items(): #items make the dict iterable
    legend_optimizers.append(opt_key)
    plt.plot(opt_value)
plt.title('Mean Squared Error of Optimizers')
plt.xlabel('epochs')
plt.ylabel('mse')
plt.legend(legend_optimizers, loc='upper right', fontsize= 20)
plt.show()
    
In [53]:
## TODO: Save the model as model.h5

model.compile(loss='mean_squared_error', optimizer = 'nadam', metrics=['mse', 'acc'])
# Batch size = 8, half the number of filters in the first conv layer (16)
hist = model.fit(X_train, y_train, validation_split=0.2, epochs=20, batch_size=8, verbose=1)
model.save('my_model.h5')
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 5s - loss: 0.0356 - mean_squared_error: 0.0356 - acc: 0.6711 - val_loss: 0.0295 - val_mean_squared_error: 0.0295 - val_acc: 0.7056
Epoch 2/20
1712/1712 [==============================] - 3s - loss: 0.0450 - mean_squared_error: 0.0450 - acc: 0.6630 - val_loss: 0.0314 - val_mean_squared_error: 0.0314 - val_acc: 0.7126
Epoch 3/20
1712/1712 [==============================] - 3s - loss: 0.0453 - mean_squared_error: 0.0453 - acc: 0.6735 - val_loss: 0.0292 - val_mean_squared_error: 0.0292 - val_acc: 0.7079
Epoch 4/20
1712/1712 [==============================] - 3s - loss: 0.0384 - mean_squared_error: 0.0384 - acc: 0.6729 - val_loss: 0.0284 - val_mean_squared_error: 0.0284 - val_acc: 0.7056
Epoch 5/20
1712/1712 [==============================] - 3s - loss: 0.0406 - mean_squared_error: 0.0406 - acc: 0.6636 - val_loss: 0.0283 - val_mean_squared_error: 0.0283 - val_acc: 0.7173
Epoch 6/20
1712/1712 [==============================] - 3s - loss: 0.0544 - mean_squared_error: 0.0544 - acc: 0.6612 - val_loss: 0.0281 - val_mean_squared_error: 0.0281 - val_acc: 0.7103
Epoch 7/20
1712/1712 [==============================] - 3s - loss: 0.0562 - mean_squared_error: 0.0562 - acc: 0.6776 - val_loss: 0.0288 - val_mean_squared_error: 0.0288 - val_acc: 0.7009
Epoch 8/20
1712/1712 [==============================] - 3s - loss: 0.0457 - mean_squared_error: 0.0457 - acc: 0.6659 - val_loss: 0.0291 - val_mean_squared_error: 0.0291 - val_acc: 0.7033
Epoch 9/20
1712/1712 [==============================] - 3s - loss: 0.0345 - mean_squared_error: 0.0345 - acc: 0.6647 - val_loss: 0.0272 - val_mean_squared_error: 0.0272 - val_acc: 0.6986
Epoch 10/20
1712/1712 [==============================] - 3s - loss: 0.0355 - mean_squared_error: 0.0355 - acc: 0.6817 - val_loss: 0.0283 - val_mean_squared_error: 0.0283 - val_acc: 0.7009
Epoch 11/20
1712/1712 [==============================] - 3s - loss: 0.0510 - mean_squared_error: 0.0510 - acc: 0.6641 - val_loss: 0.0273 - val_mean_squared_error: 0.0273 - val_acc: 0.7009
Epoch 12/20
1712/1712 [==============================] - 3s - loss: 0.0315 - mean_squared_error: 0.0315 - acc: 0.6770 - val_loss: 0.0257 - val_mean_squared_error: 0.0257 - val_acc: 0.6986
Epoch 13/20
1712/1712 [==============================] - 3s - loss: 0.0346 - mean_squared_error: 0.0346 - acc: 0.6939 - val_loss: 0.0253 - val_mean_squared_error: 0.0253 - val_acc: 0.7079
Epoch 14/20
1712/1712 [==============================] - 3s - loss: 0.0313 - mean_squared_error: 0.0313 - acc: 0.6752 - val_loss: 0.0249 - val_mean_squared_error: 0.0249 - val_acc: 0.7009
Epoch 15/20
1712/1712 [==============================] - 3s - loss: 0.0289 - mean_squared_error: 0.0289 - acc: 0.6893 - val_loss: 0.0243 - val_mean_squared_error: 0.0243 - val_acc: 0.7173
Epoch 16/20
1712/1712 [==============================] - 3s - loss: 0.0346 - mean_squared_error: 0.0346 - acc: 0.6852 - val_loss: 0.0238 - val_mean_squared_error: 0.0238 - val_acc: 0.7056
Epoch 17/20
1712/1712 [==============================] - 3s - loss: 0.0368 - mean_squared_error: 0.0368 - acc: 0.6881 - val_loss: 0.0231 - val_mean_squared_error: 0.0231 - val_acc: 0.7009
Epoch 18/20
1712/1712 [==============================] - 3s - loss: 0.0258 - mean_squared_error: 0.0258 - acc: 0.6928 - val_loss: 0.0223 - val_mean_squared_error: 0.0223 - val_acc: 0.7103
Epoch 19/20
1712/1712 [==============================] - 3s - loss: 0.0238 - mean_squared_error: 0.0238 - acc: 0.7085 - val_loss: 0.0200 - val_mean_squared_error: 0.0200 - val_acc: 0.6986
Epoch 20/20
1712/1712 [==============================] - 3s - loss: 0.0230 - mean_squared_error: 0.0230 - acc: 0.7039 - val_loss: 0.0166 - val_mean_squared_error: 0.0166 - val_acc: 0.7033

Step 7: Visualize the Loss and Test Predictions

(IMPLEMENTATION) Answer a few questions and visualize the loss

Question 1: Outline the steps you took to get to your final neural network architecture and your reasoning at each step.

Answer: Given the similarity between general image recognition and facial keypoint detection, I started from the network architecture I used in the dog-breed project. After researching CNN design for facial recognition, I added Batch Normalization between each convolutional layer and its activation function.

(Attempt 1) I used three Convolution2D layers whose filter counts shrank layer by layer: I began with 16 filters and halved the count in each subsequent layer. Following advice from previous reviewers, I used the ELU activation function, since the literature suggests ELU can both improve image-classification accuracy and reduce training time. The two parameters that most affected validation accuracy and mean squared error were the number of training epochs and the batch size. Because my first layer had 16 filters, I initially trained for 10 epochs with a batch size of 8. This converged to a small MSE of roughly 0.0025, but validation accuracy plateaued at about 71% with my best optimizer. The curves were still improving at the final epoch (accuracy rising, loss falling), suggesting there was room for more training.

(Attempt 2) Before acting on that intuition, I kept the number of epochs constant but doubled the batch size to 16, equal to the number of filters in the first layer. This was a disaster: the MSE oscillated widely, suggesting training instability and possible overfitting.

(Attempt 3) I then trained for 20 epochs with a batch size of 8. Validation accuracy did not change, but the gap in loss and MSE between the training and validation sets shrank. Unfortunately, 20 epochs may have been too many, since the accuracy plot now suggests overfitting: the training set outperforms the validation set by roughly epoch 17.

(Attempt 4) As I note in my answer to Question 3, my intuition was that I had accidentally "bottlenecked" the network's capacity to extract relevant features by halving the filter count at each layer (16, then 8, then 4). I therefore did the opposite, doubling the number of filters at each convolutional layer before flattening into a fully connected dense layer. Given my time constraints, this produced the best-performing network I could develop, with a validation accuracy of roughly 82%.
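For reference, here is a minimal sketch of the two filter progressions compared in Attempts 1 and 4. The helper build_keypoint_cnn and the two model names are illustrative assumptions introduced only for this comparison; the layer hyperparameters simply mirror the model2 cell further below, not the exact original Attempt 1 cell.

# Illustrative sketch only -- the filter counts are the point; the layers mirror model2 below.
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Dropout, Activation
from keras.layers import Flatten, Dense, BatchNormalization

def build_keypoint_cnn(filter_counts):
    """Conv -> BatchNorm -> ELU -> MaxPool stack, ending in a 30-unit dense output."""
    net = Sequential()
    for i, n_filters in enumerate(filter_counts):
        if i == 0:
            net.add(Convolution2D(filters=n_filters, kernel_size=2, padding='same',
                                  input_shape=(96, 96, 1)))
        else:
            net.add(Convolution2D(filters=n_filters, kernel_size=2, padding='same'))
        net.add(BatchNormalization())
        net.add(Activation('elu'))
        net.add(MaxPooling2D(pool_size=2))
    net.add(Dropout(0.5))
    net.add(Flatten())
    net.add(Dense(30))  # 2 coordinates for each of the 15 facial keypoints
    return net

attempt1_model = build_keypoint_cnn([16, 8, 4])    # shrinking filters (the "bottleneck")
attempt4_model = build_keypoint_cnn([16, 32, 64])  # doubling filters, as in model2 below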

Question 2: Defend your choice of optimizer. Which optimizers did you test, and how did you determine which worked best?

Answer: I tested every optimizer ('sgd', 'rmsprop', 'adagrad', 'adadelta', 'adamax', 'nadam') available through the Keras import statement and saved the resulting mean squared error history for each in a dictionary. I then plotted these histories and used the minimum mean squared error as my criterion for the "best" optimizer. In my first architecture and the first three attempts at tuning, the "nadam" optimizer consistently performed best. However, after my fourth attempt, which included a change to the underlying architecture, the "adamax" optimizer came out ahead. Among all the optimizers tested, adamax combined the lowest mean squared error with the most stable behavior, since its mean squared error did not oscillate drastically across epochs. Details and the performance plot for the successful model ("model2") appear two to three cells below.
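As a small illustration of the selection criterion described above, the winning optimizer can also be picked programmatically from the recorded histories instead of by eye. This is a minimal sketch assuming an mse_dict-style dictionary (optimizer name mapped to its per-epoch MSE list), as built in the sweep cells in this notebook; best_optimizer is a hypothetical helper name.

# Minimal sketch: pick the optimizer whose recorded MSE history reaches the lowest value.
# Assumes a dict like mse_dict / mse_dict2, i.e. {optimizer_name: [mse_epoch_1, mse_epoch_2, ...]}.
def best_optimizer(mse_histories):
    scores = {name: min(history) for name, history in mse_histories.items()}
    winner = min(scores, key=scores.get)
    return winner, scores[winner]

# Example usage with the dictionary built in the optimizer-sweep cell:
# name, mse = best_optimizer(mse_dict2)
# print("Best optimizer: {} (min MSE = {:.6f})".format(name, mse))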

Use the code cell below to plot the training and validation loss of your neural network. You may find this resource useful.

In [54]:
## TODO: Visualize the training and validation loss of your neural network
## Plotting approach adapted from http://machinelearningmastery.com/display-deep-learning-model-training-history-in-keras/
def plot_history(hist):
    print(hist.history.keys())
    plt.plot(hist.history['acc'])
    plt.plot(hist.history['val_acc'])
    plt.title('model accuracy')
    plt.xlabel('epochs')
    plt.ylabel('accuracy')
    plt.legend(['train', 'test'], loc='upper left', fontsize=15)
    plt.show()
    plt.plot(hist.history['loss'])
    plt.plot(hist.history['val_loss'])
    plt.title('model loss')
    plt.xlabel('epochs')
    plt.ylabel('loss')
    plt.legend(['train', 'test'], loc='upper left', fontsize=15)
    plt.show()
    plt.plot(hist.history['mean_squared_error'])
    plt.plot(hist.history['val_mean_squared_error'])
    plt.title('mean squared error')
    plt.xlabel('epochs')
    plt.ylabel('mean squared error')
    plt.legend(['train', 'test'], loc='upper left', fontsize=15)
    plt.show()
plot_history(hist)
# Turned this into a function because I want to keep my graphs and include the newer iterations below as cells. 
dict_keys(['val_loss', 'loss', 'val_acc', 'acc', 'val_mean_squared_error', 'mean_squared_error'])

Question 3: Do you notice any evidence of overfitting or underfitting in the above plot? If so, what steps have you taken to improve your model? Note that slight overfitting or underfitting will not hurt your chances of a successful submission, as long as you have attempted some solutions towards improving your model (such as regularization, dropout, increased/decreased number of layers, etc).

Answer: Examining the curves above, my model does not overfit, but it unfortunately does appear to underfit. The first model seems to cap out at about 71 percent validation accuracy with the nadam optimizer. In the experiments below, I try switching from ELU to ReLU activation functions and observe how this affects performance. After my first three attempts at optimizing this architecture, roughly 15 to 17 epochs appear to be optimal for this architecture and this data. Below, I include my fourth attempt at improving the network architecture. My intuition is that I had performed a sort of "dimensionality reduction" on the features my network could observe in the training data, because each subsequent layer shrank the number of filters. I now instead double the number of filters while keeping everything else the same. This successfully raised validation accuracy to roughly 82 percent, which I am satisfied with. In the final model (model2), shown at the end, the validation accuracy consistently outperforms the training accuracy across the 20 epochs, suggesting the network generalizes to features that are similar to, but not exactly the same as, those in the training data. At first glance, it may also seem encouraging that both the loss and the mean squared error on the validation set are much smaller in magnitude than on the training set. This could indicate good generalization, especially since my training set is relatively large while my validation set is rather small (20% of the original data). To extend this critique a little further, though, the validation loss and mean squared error may only appear artificially small because the validation set itself is small: perhaps my network performs well only on a subset of images, and those "easy" images may simply have happened to land in the validation split.
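One standard remedy for the "too many epochs" issue discussed above is Keras's EarlyStopping callback, which halts training once the validation loss stops improving, rather than hand-picking an epoch count. A minimal sketch follows; the patience value is an illustrative assumption, and the commented fit call reuses model2, X_train, and y_train from this notebook.

# Minimal sketch: stop training when validation loss stops improving, instead of
# hand-picking an epoch count. The patience value here is an illustrative assumption.
from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=3, verbose=1)
# hist2 = model2.fit(X_train, y_train, validation_split=0.2,
#                    epochs=40, batch_size=8, verbose=1,
#                    callbacks=[early_stop])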

In [59]:
# Here we go again, Attempt 4 to optimize everything! 
from keras.optimizers import SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam
# Import deep learning resources from Keras
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Dropout, Activation
from keras.layers import Flatten, Dense, BatchNormalization


## TODO: Specify a CNN architecture
# Your model should accept 96x96 pixel grayscale images as input
# It should have a fully-connected output layer with 30 values (2 for each facial keypoint)

#Initial pass is a variation of my architecture from the Dog App Project, following
# Reviewer advice to include batch normalization and elu instead of relu. 
model2 = Sequential()
model2.add(Convolution2D(filters=16, kernel_size=2, padding='same', input_shape=(96,96,1)))
model2.add(BatchNormalization(axis = -1))
model2.add(Activation('elu'))
model2.add(MaxPooling2D(pool_size=2))
model2.add(Convolution2D(filters=32, kernel_size=2, padding='same'))
model2.add(BatchNormalization())
model2.add(Activation('elu'))
model2.add(MaxPooling2D(pool_size=2))
model2.add(Convolution2D(filters=64, kernel_size=2, padding='same'))
model2.add(BatchNormalization())
model2.add(Activation('elu'))
model2.add(MaxPooling2D(pool_size=2)) 
model2.add(Dropout(0.5))
model2.add(Flatten())
model2.add(Dense(30))
model2.add(BatchNormalization())
model2.add(Activation('elu'))
# Summarize the model
model2.summary()

#Assess the Best Optimizer
optimizer_choices = ['sgd', 'rmsprop', 'adagrad', 'adadelta', 'adamax', 'nadam']
mse_dict2 = {}
for opt_key in optimizer_choices:
    print("Assessing the {} model".format(opt_key))
    #Compile and Train the Model 
    model2.compile(loss='mean_squared_error', optimizer = opt_key, metrics=['mse', 'acc'])
    # Batch size = 8, half the number of filters in the first conv layer (16)
    hist2 = model2.fit(X_train, y_train, validation_split=0.2, epochs=20, batch_size=8, verbose=1)
    # Started with batch size = 8; the best optimizer reached 71% validation accuracy. Increasing it to 16 made the MSE oscillate badly.
    # Now testing 20 epochs with a batch size of 8.
    mse_dict2[opt_key] = hist2.history['mean_squared_error']
    
# Plot / visualize the MSE of each optimizer
legend_optimizers = []
plt.figure(figsize=(20,20))
for opt_key, opt_value in mse_dict2.items(): #items make the dict iterable
    legend_optimizers.append(opt_key)
    plt.plot(opt_value)
plt.title('Mean Squared Error of Optimizers')
plt.xlabel('epochs')
plt.ylabel('mse')
plt.legend(legend_optimizers, loc='upper right', fontsize= 20)
plt.show()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_15 (Conv2D)           (None, 96, 96, 16)        80        
_________________________________________________________________
batch_normalization_16 (Batc (None, 96, 96, 16)        64        
_________________________________________________________________
activation_13 (Activation)   (None, 96, 96, 16)        0         
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 48, 48, 16)        0         
_________________________________________________________________
conv2d_16 (Conv2D)           (None, 48, 48, 32)        2080      
_________________________________________________________________
batch_normalization_17 (Batc (None, 48, 48, 32)        128       
_________________________________________________________________
activation_14 (Activation)   (None, 48, 48, 32)        0         
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 24, 24, 32)        0         
_________________________________________________________________
conv2d_17 (Conv2D)           (None, 24, 24, 64)        8256      
_________________________________________________________________
batch_normalization_18 (Batc (None, 24, 24, 64)        256       
_________________________________________________________________
activation_15 (Activation)   (None, 24, 24, 64)        0         
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 12, 12, 64)        0         
_________________________________________________________________
dropout_5 (Dropout)          (None, 12, 12, 64)        0         
_________________________________________________________________
flatten_5 (Flatten)          (None, 9216)              0         
_________________________________________________________________
dense_5 (Dense)              (None, 30)                276510    
_________________________________________________________________
batch_normalization_19 (Batc (None, 30)                120       
_________________________________________________________________
activation_16 (Activation)   (None, 30)                0         
=================================================================
Total params: 287,494
Trainable params: 287,210
Non-trainable params: 284
_________________________________________________________________
Assessing the sgd model
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 5s - loss: 0.7111 - mean_squared_error: 0.7111 - acc: 0.0315 - val_loss: 0.6292 - val_mean_squared_error: 0.6292 - val_acc: 0.1192
Epoch 2/20
1712/1712 [==============================] - 3s - loss: 0.5673 - mean_squared_error: 0.5673 - acc: 0.0555 - val_loss: 0.4617 - val_mean_squared_error: 0.4617 - val_acc: 0.1355
Epoch 3/20
1712/1712 [==============================] - 3s - loss: 0.4636 - mean_squared_error: 0.4636 - acc: 0.0736 - val_loss: 0.3511 - val_mean_squared_error: 0.3511 - val_acc: 0.1145
Epoch 4/20
1712/1712 [==============================] - 3s - loss: 0.3906 - mean_squared_error: 0.3906 - acc: 0.0829 - val_loss: 0.3092 - val_mean_squared_error: 0.3092 - val_acc: 0.1332
Epoch 5/20
1712/1712 [==============================] - 3s - loss: 0.3276 - mean_squared_error: 0.3276 - acc: 0.1110 - val_loss: 0.2761 - val_mean_squared_error: 0.2761 - val_acc: 0.1519
Epoch 6/20
1712/1712 [==============================] - 3s - loss: 0.2901 - mean_squared_error: 0.2901 - acc: 0.1338 - val_loss: 0.2406 - val_mean_squared_error: 0.2406 - val_acc: 0.2991
Epoch 7/20
1712/1712 [==============================] - 3s - loss: 0.2531 - mean_squared_error: 0.2531 - acc: 0.1583 - val_loss: 0.2122 - val_mean_squared_error: 0.2122 - val_acc: 0.2967
Epoch 8/20
1712/1712 [==============================] - 3s - loss: 0.2267 - mean_squared_error: 0.2267 - acc: 0.1904 - val_loss: 0.1945 - val_mean_squared_error: 0.1945 - val_acc: 0.3154
Epoch 9/20
1712/1712 [==============================] - 3s - loss: 0.2046 - mean_squared_error: 0.2046 - acc: 0.1963 - val_loss: 0.1722 - val_mean_squared_error: 0.1722 - val_acc: 0.3364
Epoch 10/20
1712/1712 [==============================] - 3s - loss: 0.1853 - mean_squared_error: 0.1853 - acc: 0.2079 - val_loss: 0.1620 - val_mean_squared_error: 0.1620 - val_acc: 0.3505
Epoch 11/20
1712/1712 [==============================] - 3s - loss: 0.1675 - mean_squared_error: 0.1675 - acc: 0.2319 - val_loss: 0.1436 - val_mean_squared_error: 0.1436 - val_acc: 0.4065
Epoch 12/20
1712/1712 [==============================] - 3s - loss: 0.1538 - mean_squared_error: 0.1538 - acc: 0.2593 - val_loss: 0.1333 - val_mean_squared_error: 0.1333 - val_acc: 0.4206
Epoch 13/20
1712/1712 [==============================] - 3s - loss: 0.1378 - mean_squared_error: 0.1378 - acc: 0.2605 - val_loss: 0.1322 - val_mean_squared_error: 0.1322 - val_acc: 0.4182
Epoch 14/20
1712/1712 [==============================] - 3s - loss: 0.1263 - mean_squared_error: 0.1263 - acc: 0.2874 - val_loss: 0.1105 - val_mean_squared_error: 0.1105 - val_acc: 0.4463
Epoch 15/20
1712/1712 [==============================] - 3s - loss: 0.1132 - mean_squared_error: 0.1132 - acc: 0.3236 - val_loss: 0.0937 - val_mean_squared_error: 0.0937 - val_acc: 0.4813
Epoch 16/20
1712/1712 [==============================] - 3s - loss: 0.1018 - mean_squared_error: 0.1018 - acc: 0.3189 - val_loss: 0.1016 - val_mean_squared_error: 0.1016 - val_acc: 0.4556
Epoch 17/20
1712/1712 [==============================] - 3s - loss: 0.0907 - mean_squared_error: 0.0907 - acc: 0.3376 - val_loss: 0.0916 - val_mean_squared_error: 0.0916 - val_acc: 0.4720
Epoch 18/20
1712/1712 [==============================] - 3s - loss: 0.0807 - mean_squared_error: 0.0807 - acc: 0.3686 - val_loss: 0.0732 - val_mean_squared_error: 0.0732 - val_acc: 0.4907
Epoch 19/20
1712/1712 [==============================] - 3s - loss: 0.0715 - mean_squared_error: 0.0715 - acc: 0.3680 - val_loss: 0.0653 - val_mean_squared_error: 0.0653 - val_acc: 0.4953
Epoch 20/20
1712/1712 [==============================] - 3s - loss: 0.0625 - mean_squared_error: 0.0625 - acc: 0.4030 - val_loss: 0.0558 - val_mean_squared_error: 0.0558 - val_acc: 0.5327
Assessing the rmsprop model
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 5s - loss: 0.0234 - mean_squared_error: 0.0234 - acc: 0.5771 - val_loss: 0.0046 - val_mean_squared_error: 0.0046 - val_acc: 0.7290
Epoch 2/20
1712/1712 [==============================] - 3s - loss: 0.0025 - mean_squared_error: 0.0025 - acc: 0.7366 - val_loss: 0.0018 - val_mean_squared_error: 0.0018 - val_acc: 0.7150
Epoch 3/20
1712/1712 [==============================] - 3s - loss: 0.0020 - mean_squared_error: 0.0020 - acc: 0.7436 - val_loss: 0.0015 - val_mean_squared_error: 0.0015 - val_acc: 0.7757
Epoch 4/20
1712/1712 [==============================] - 3s - loss: 0.0019 - mean_squared_error: 0.0019 - acc: 0.7570 - val_loss: 0.0014 - val_mean_squared_error: 0.0014 - val_acc: 0.7804
Epoch 5/20
1712/1712 [==============================] - 3s - loss: 0.0018 - mean_squared_error: 0.0018 - acc: 0.7512 - val_loss: 0.0014 - val_mean_squared_error: 0.0014 - val_acc: 0.7991
Epoch 6/20
1712/1712 [==============================] - 3s - loss: 0.0017 - mean_squared_error: 0.0017 - acc: 0.7617 - val_loss: 0.0013 - val_mean_squared_error: 0.0013 - val_acc: 0.7780
Epoch 7/20
1712/1712 [==============================] - 3s - loss: 0.0016 - mean_squared_error: 0.0016 - acc: 0.7681 - val_loss: 0.0012 - val_mean_squared_error: 0.0012 - val_acc: 0.7967
Epoch 8/20
1712/1712 [==============================] - 3s - loss: 0.0017 - mean_squared_error: 0.0017 - acc: 0.7687 - val_loss: 0.0013 - val_mean_squared_error: 0.0013 - val_acc: 0.7897
Epoch 9/20
1712/1712 [==============================] - 3s - loss: 0.0016 - mean_squared_error: 0.0016 - acc: 0.7652 - val_loss: 0.0011 - val_mean_squared_error: 0.0011 - val_acc: 0.8107
Epoch 10/20
1712/1712 [==============================] - 3s - loss: 0.0016 - mean_squared_error: 0.0016 - acc: 0.7704 - val_loss: 0.0011 - val_mean_squared_error: 0.0011 - val_acc: 0.8107
Epoch 11/20
1712/1712 [==============================] - 3s - loss: 0.0015 - mean_squared_error: 0.0015 - acc: 0.7769 - val_loss: 0.0010 - val_mean_squared_error: 0.0010 - val_acc: 0.7991
Epoch 12/20
1712/1712 [==============================] - 3s - loss: 0.0015 - mean_squared_error: 0.0015 - acc: 0.7664 - val_loss: 0.0011 - val_mean_squared_error: 0.0011 - val_acc: 0.8178
Epoch 13/20
1712/1712 [==============================] - 3s - loss: 0.0015 - mean_squared_error: 0.0015 - acc: 0.7710 - val_loss: 0.0010 - val_mean_squared_error: 0.0010 - val_acc: 0.8014
Epoch 14/20
1712/1712 [==============================] - 3s - loss: 0.0015 - mean_squared_error: 0.0015 - acc: 0.7792 - val_loss: 0.0010 - val_mean_squared_error: 0.0010 - val_acc: 0.7850
Epoch 15/20
1712/1712 [==============================] - 3s - loss: 0.0015 - mean_squared_error: 0.0015 - acc: 0.7780 - val_loss: 0.0010 - val_mean_squared_error: 0.0010 - val_acc: 0.8014
Epoch 16/20
1712/1712 [==============================] - 3s - loss: 0.0014 - mean_squared_error: 0.0014 - acc: 0.7839 - val_loss: 0.0011 - val_mean_squared_error: 0.0011 - val_acc: 0.7944
Epoch 17/20
1712/1712 [==============================] - 3s - loss: 0.0015 - mean_squared_error: 0.0015 - acc: 0.7856 - val_loss: 9.8773e-04 - val_mean_squared_error: 9.8773e-04 - val_acc: 0.8131
Epoch 18/20
1712/1712 [==============================] - 3s - loss: 0.0014 - mean_squared_error: 0.0014 - acc: 0.7897 - val_loss: 0.0010 - val_mean_squared_error: 0.0010 - val_acc: 0.8154
Epoch 19/20
1712/1712 [==============================] - 3s - loss: 0.0014 - mean_squared_error: 0.0014 - acc: 0.7979 - val_loss: 9.8839e-04 - val_mean_squared_error: 9.8839e-04 - val_acc: 0.8248
Epoch 20/20
1712/1712 [==============================] - 3s - loss: 0.0014 - mean_squared_error: 0.0014 - acc: 0.7886 - val_loss: 9.7663e-04 - val_mean_squared_error: 9.7663e-04 - val_acc: 0.7991
Assessing the adagrad model
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 5s - loss: 0.0015 - mean_squared_error: 0.0015 - acc: 0.7769 - val_loss: 9.4274e-04 - val_mean_squared_error: 9.4274e-04 - val_acc: 0.8201
Epoch 2/20
1712/1712 [==============================] - 3s - loss: 0.0014 - mean_squared_error: 0.0014 - acc: 0.7891 - val_loss: 9.3713e-04 - val_mean_squared_error: 9.3713e-04 - val_acc: 0.7967
Epoch 3/20
1712/1712 [==============================] - 3s - loss: 0.0014 - mean_squared_error: 0.0014 - acc: 0.7775 - val_loss: 9.1278e-04 - val_mean_squared_error: 9.1278e-04 - val_acc: 0.8248
Epoch 4/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7996 - val_loss: 9.1571e-04 - val_mean_squared_error: 9.1571e-04 - val_acc: 0.8294
Epoch 5/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8061 - val_loss: 8.9042e-04 - val_mean_squared_error: 8.9042e-04 - val_acc: 0.8271
Epoch 6/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8049 - val_loss: 8.9895e-04 - val_mean_squared_error: 8.9895e-04 - val_acc: 0.8294
Epoch 7/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7991 - val_loss: 8.8065e-04 - val_mean_squared_error: 8.8065e-04 - val_acc: 0.8224
Epoch 8/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8072 - val_loss: 8.9397e-04 - val_mean_squared_error: 8.9397e-04 - val_acc: 0.8271
Epoch 9/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7973 - val_loss: 8.9430e-04 - val_mean_squared_error: 8.9430e-04 - val_acc: 0.8178
Epoch 10/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7979 - val_loss: 8.8659e-04 - val_mean_squared_error: 8.8659e-04 - val_acc: 0.8318
Epoch 11/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8072 - val_loss: 8.8311e-04 - val_mean_squared_error: 8.8311e-04 - val_acc: 0.8341
Epoch 12/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8084 - val_loss: 8.8660e-04 - val_mean_squared_error: 8.8660e-04 - val_acc: 0.8178
Epoch 13/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7967 - val_loss: 8.7668e-04 - val_mean_squared_error: 8.7668e-04 - val_acc: 0.8364
Epoch 14/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8014 - val_loss: 8.7991e-04 - val_mean_squared_error: 8.7991e-04 - val_acc: 0.8271
Epoch 15/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8125 - val_loss: 8.7821e-04 - val_mean_squared_error: 8.7821e-04 - val_acc: 0.8248
Epoch 16/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8061 - val_loss: 8.8161e-04 - val_mean_squared_error: 8.8161e-04 - val_acc: 0.8224
Epoch 17/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7956 - val_loss: 8.7642e-04 - val_mean_squared_error: 8.7642e-04 - val_acc: 0.8248
Epoch 18/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8078 - val_loss: 8.8317e-04 - val_mean_squared_error: 8.8317e-04 - val_acc: 0.8201
Epoch 19/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8178 - val_loss: 8.9091e-04 - val_mean_squared_error: 8.9091e-04 - val_acc: 0.8154
Epoch 20/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7950 - val_loss: 8.7859e-04 - val_mean_squared_error: 8.7859e-04 - val_acc: 0.8248
Assessing the adadelta model
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 6s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7938 - val_loss: 8.7902e-04 - val_mean_squared_error: 8.7902e-04 - val_acc: 0.8248
Epoch 2/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8137 - val_loss: 8.7607e-04 - val_mean_squared_error: 8.7607e-04 - val_acc: 0.8248
Epoch 3/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8084 - val_loss: 8.7670e-04 - val_mean_squared_error: 8.7670e-04 - val_acc: 0.8248
Epoch 4/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8037 - val_loss: 8.7494e-04 - val_mean_squared_error: 8.7494e-04 - val_acc: 0.8318
Epoch 5/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8107 - val_loss: 8.7633e-04 - val_mean_squared_error: 8.7633e-04 - val_acc: 0.8224
Epoch 6/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8002 - val_loss: 8.7127e-04 - val_mean_squared_error: 8.7127e-04 - val_acc: 0.8248
Epoch 7/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8002 - val_loss: 8.7123e-04 - val_mean_squared_error: 8.7123e-04 - val_acc: 0.8341
Epoch 8/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8049 - val_loss: 8.7723e-04 - val_mean_squared_error: 8.7723e-04 - val_acc: 0.8248
Epoch 9/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8008 - val_loss: 8.7109e-04 - val_mean_squared_error: 8.7109e-04 - val_acc: 0.8411
Epoch 10/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.7996 - val_loss: 8.6897e-04 - val_mean_squared_error: 8.6897e-04 - val_acc: 0.8224
Epoch 11/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8020 - val_loss: 8.8018e-04 - val_mean_squared_error: 8.8018e-04 - val_acc: 0.8201
Epoch 12/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8020 - val_loss: 8.7015e-04 - val_mean_squared_error: 8.7015e-04 - val_acc: 0.8364
Epoch 13/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8037 - val_loss: 8.6978e-04 - val_mean_squared_error: 8.6978e-04 - val_acc: 0.8224
Epoch 14/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8037 - val_loss: 8.7308e-04 - val_mean_squared_error: 8.7308e-04 - val_acc: 0.8318
Epoch 15/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8090 - val_loss: 8.6901e-04 - val_mean_squared_error: 8.6901e-04 - val_acc: 0.8271
Epoch 16/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7973 - val_loss: 8.7333e-04 - val_mean_squared_error: 8.7333e-04 - val_acc: 0.8201
Epoch 17/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8119 - val_loss: 8.7213e-04 - val_mean_squared_error: 8.7213e-04 - val_acc: 0.8341
Epoch 18/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8172 - val_loss: 8.6912e-04 - val_mean_squared_error: 8.6912e-04 - val_acc: 0.8294
Epoch 19/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8032 - val_loss: 8.7692e-04 - val_mean_squared_error: 8.7692e-04 - val_acc: 0.8341
Epoch 20/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8055 - val_loss: 8.7108e-04 - val_mean_squared_error: 8.7108e-04 - val_acc: 0.8294
Assessing the adamax model
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 5s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7996 - val_loss: 9.0338e-04 - val_mean_squared_error: 9.0338e-04 - val_acc: 0.8271
Epoch 2/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8055 - val_loss: 9.2424e-04 - val_mean_squared_error: 9.2424e-04 - val_acc: 0.8224
Epoch 3/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8037 - val_loss: 9.0566e-04 - val_mean_squared_error: 9.0566e-04 - val_acc: 0.8201
Epoch 4/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8107 - val_loss: 8.9716e-04 - val_mean_squared_error: 8.9716e-04 - val_acc: 0.8084
Epoch 5/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8113 - val_loss: 9.2288e-04 - val_mean_squared_error: 9.2288e-04 - val_acc: 0.8131
Epoch 6/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.7973 - val_loss: 8.8285e-04 - val_mean_squared_error: 8.8285e-04 - val_acc: 0.8178
Epoch 7/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8166 - val_loss: 8.7575e-04 - val_mean_squared_error: 8.7575e-04 - val_acc: 0.8318
Epoch 8/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8032 - val_loss: 9.1585e-04 - val_mean_squared_error: 9.1585e-04 - val_acc: 0.7991
Epoch 9/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8131 - val_loss: 8.9176e-04 - val_mean_squared_error: 8.9176e-04 - val_acc: 0.8061
Epoch 10/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8037 - val_loss: 8.8525e-04 - val_mean_squared_error: 8.8525e-04 - val_acc: 0.8178
Epoch 11/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8061 - val_loss: 9.0986e-04 - val_mean_squared_error: 9.0986e-04 - val_acc: 0.8061
Epoch 12/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8002 - val_loss: 8.7879e-04 - val_mean_squared_error: 8.7879e-04 - val_acc: 0.8248
Epoch 13/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8037 - val_loss: 9.1269e-04 - val_mean_squared_error: 9.1269e-04 - val_acc: 0.8131
Epoch 14/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.7938 - val_loss: 8.8821e-04 - val_mean_squared_error: 8.8821e-04 - val_acc: 0.8364
Epoch 15/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.7944 - val_loss: 8.8916e-04 - val_mean_squared_error: 8.8916e-04 - val_acc: 0.8131
Epoch 16/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7996 - val_loss: 8.8953e-04 - val_mean_squared_error: 8.8953e-04 - val_acc: 0.8014
Epoch 17/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8102 - val_loss: 8.9809e-04 - val_mean_squared_error: 8.9809e-04 - val_acc: 0.8224
Epoch 18/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8072 - val_loss: 8.8880e-04 - val_mean_squared_error: 8.8880e-04 - val_acc: 0.8154
Epoch 19/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8084 - val_loss: 9.3093e-04 - val_mean_squared_error: 9.3093e-04 - val_acc: 0.8131
Epoch 20/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8137 - val_loss: 8.7037e-04 - val_mean_squared_error: 8.7037e-04 - val_acc: 0.8294
Assessing the nadam model
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 6s - loss: 0.0014 - mean_squared_error: 0.0014 - acc: 0.8026 - val_loss: 9.9994e-04 - val_mean_squared_error: 9.9994e-04 - val_acc: 0.7874
Epoch 2/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7991 - val_loss: 9.9782e-04 - val_mean_squared_error: 9.9782e-04 - val_acc: 0.8271
Epoch 3/20
1712/1712 [==============================] - 3s - loss: 0.0014 - mean_squared_error: 0.0014 - acc: 0.7792 - val_loss: 9.9141e-04 - val_mean_squared_error: 9.9141e-04 - val_acc: 0.8084
Epoch 4/20
1712/1712 [==============================] - 3s - loss: 0.0014 - mean_squared_error: 0.0014 - acc: 0.7915 - val_loss: 9.9944e-04 - val_mean_squared_error: 9.9944e-04 - val_acc: 0.8014
Epoch 5/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7926 - val_loss: 9.5947e-04 - val_mean_squared_error: 9.5947e-04 - val_acc: 0.8271
Epoch 6/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8119 - val_loss: 0.0012 - val_mean_squared_error: 0.0012 - val_acc: 0.7757
Epoch 7/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7938 - val_loss: 9.8041e-04 - val_mean_squared_error: 9.8041e-04 - val_acc: 0.7944
Epoch 8/20
1712/1712 [==============================] - 3s - loss: 0.0014 - mean_squared_error: 0.0014 - acc: 0.7821 - val_loss: 9.5798e-04 - val_mean_squared_error: 9.5798e-04 - val_acc: 0.8341
Epoch 9/20
1712/1712 [==============================] - 3s - loss: 0.0014 - mean_squared_error: 0.0014 - acc: 0.7938 - val_loss: 9.6309e-04 - val_mean_squared_error: 9.6309e-04 - val_acc: 0.8014
Epoch 10/20
1712/1712 [==============================] - 3s - loss: 0.0014 - mean_squared_error: 0.0014 - acc: 0.7944 - val_loss: 9.3796e-04 - val_mean_squared_error: 9.3796e-04 - val_acc: 0.8131
Epoch 11/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7956 - val_loss: 9.3796e-04 - val_mean_squared_error: 9.3796e-04 - val_acc: 0.8248
Epoch 12/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7991 - val_loss: 9.4375e-04 - val_mean_squared_error: 9.4375e-04 - val_acc: 0.8248
Epoch 13/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7944 - val_loss: 9.4266e-04 - val_mean_squared_error: 9.4266e-04 - val_acc: 0.8061
Epoch 14/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8061 - val_loss: 9.3668e-04 - val_mean_squared_error: 9.3668e-04 - val_acc: 0.8037
Epoch 15/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7938 - val_loss: 9.7807e-04 - val_mean_squared_error: 9.7807e-04 - val_acc: 0.7687
Epoch 16/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8178 - val_loss: 9.1250e-04 - val_mean_squared_error: 9.1250e-04 - val_acc: 0.8224
Epoch 17/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8002 - val_loss: 9.9777e-04 - val_mean_squared_error: 9.9777e-04 - val_acc: 0.8154
Epoch 18/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7891 - val_loss: 9.5925e-04 - val_mean_squared_error: 9.5925e-04 - val_acc: 0.8131
Epoch 19/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8055 - val_loss: 0.0010 - val_mean_squared_error: 0.0010 - val_acc: 0.7850
Epoch 20/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8043 - val_loss: 0.0010 - val_mean_squared_error: 0.0010 - val_acc: 0.8131
In [64]:
## The other optimizers' performance is not visible due to SGD's large early MSE. Capping the y-axis to visualize the fluctuation across epochs in more detail.
legend_optimizers = []
plt.figure(figsize=(20,20))
for opt_key, opt_value in mse_dict2.items(): #items make the dict iterable
    legend_optimizers.append(opt_key)
    plt.plot(opt_value)
plt.title('Mean Squared Error of Optimizers')
plt.ylim(0,0.005) # Capping the y-axis because the other optimizers cannot be seen otherwise
# (A looser cap still left rmsprop's first-epoch value dominating the plot, so the cap was tightened.)
plt.xlabel('epochs')
plt.ylabel('mse')
plt.legend(legend_optimizers, loc='upper right', fontsize= 20)
plt.show()
In [63]:
## Interesting: under this architecture, adadelta yields the best result.
## We retrain the new model, model2, with adadelta and save it.
## TODO: Save the model as model.h5

model2.compile(loss='mean_squared_error', optimizer = 'adadelta', metrics=['mse', 'acc'])
# Batch size = 8, half the number of filters in the first conv layer (16)
hist2 = model2.fit(X_train, y_train, validation_split=0.2, epochs=20, batch_size=8, verbose=1)
model2.save('my_model.h5')
plot_history(hist2)
Train on 1712 samples, validate on 428 samples
Epoch 1/20
1712/1712 [==============================] - 6s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8178 - val_loss: 8.8385e-04 - val_mean_squared_error: 8.8385e-04 - val_acc: 0.8271
Epoch 2/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8131 - val_loss: 8.8222e-04 - val_mean_squared_error: 8.8222e-04 - val_acc: 0.8224
Epoch 3/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8213 - val_loss: 8.8969e-04 - val_mean_squared_error: 8.8969e-04 - val_acc: 0.8248
Epoch 4/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.7932 - val_loss: 8.8301e-04 - val_mean_squared_error: 8.8301e-04 - val_acc: 0.8248
Epoch 5/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8061 - val_loss: 8.8377e-04 - val_mean_squared_error: 8.8377e-04 - val_acc: 0.8294
Epoch 6/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8178 - val_loss: 8.8312e-04 - val_mean_squared_error: 8.8312e-04 - val_acc: 0.8294
Epoch 7/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8148 - val_loss: 8.8626e-04 - val_mean_squared_error: 8.8626e-04 - val_acc: 0.8248
Epoch 8/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8061 - val_loss: 8.8357e-04 - val_mean_squared_error: 8.8357e-04 - val_acc: 0.8224
Epoch 9/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8084 - val_loss: 8.8646e-04 - val_mean_squared_error: 8.8646e-04 - val_acc: 0.8224
Epoch 10/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8178 - val_loss: 8.7818e-04 - val_mean_squared_error: 8.7818e-04 - val_acc: 0.8201
Epoch 11/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8026 - val_loss: 8.8100e-04 - val_mean_squared_error: 8.8100e-04 - val_acc: 0.8201
Epoch 12/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8201 - val_loss: 8.8091e-04 - val_mean_squared_error: 8.8091e-04 - val_acc: 0.8224
Epoch 13/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8107 - val_loss: 8.8522e-04 - val_mean_squared_error: 8.8522e-04 - val_acc: 0.8178
Epoch 14/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8090 - val_loss: 8.7899e-04 - val_mean_squared_error: 8.7899e-04 - val_acc: 0.8178
Epoch 15/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8148 - val_loss: 8.8852e-04 - val_mean_squared_error: 8.8852e-04 - val_acc: 0.8201
Epoch 16/20
1712/1712 [==============================] - 3s - loss: 0.0013 - mean_squared_error: 0.0013 - acc: 0.8067 - val_loss: 8.8833e-04 - val_mean_squared_error: 8.8833e-04 - val_acc: 0.8248
Epoch 17/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8113 - val_loss: 8.8029e-04 - val_mean_squared_error: 8.8029e-04 - val_acc: 0.8294
Epoch 18/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8143 - val_loss: 8.7654e-04 - val_mean_squared_error: 8.7654e-04 - val_acc: 0.8248
Epoch 19/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8090 - val_loss: 8.8538e-04 - val_mean_squared_error: 8.8538e-04 - val_acc: 0.8201
Epoch 20/20
1712/1712 [==============================] - 3s - loss: 0.0012 - mean_squared_error: 0.0012 - acc: 0.8008 - val_loss: 8.8047e-04 - val_mean_squared_error: 8.8047e-04 - val_acc: 0.8294
dict_keys(['val_loss', 'loss', 'val_acc', 'acc', 'val_mean_squared_error', 'mean_squared_error'])

Visualize a Subset of the Test Predictions

Execute the code cell below to visualize your model's predicted keypoints on a subset of the testing images.

In [65]:
# Use model2 (actually the fourth training attempt), since it performed best in my case.
y_test = model2.predict(X_test)
fig = plt.figure(figsize=(20,20))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(9):
    ax = fig.add_subplot(3, 3, i + 1, xticks=[], yticks=[])
    plot_data(X_test[i], y_test[i], ax)

Step 8: Complete the pipeline

With the work you did in Sections 1 and 2 of this notebook, along with your freshly trained facial keypoint detector, you can now complete the full pipeline. That is, given a color image containing a person or persons, you can now

  • Detect the faces in this image automatically using OpenCV
  • Predict the facial keypoints in each face detected in the image
  • Paint predicted keypoints on each face detected

In this subsection you will do just that!

(IMPLEMENTATION) Facial Keypoints Detector

Use the OpenCV face detection functionality you built in previous sections to expand your keypoint detector to color images of arbitrary size. Your function should perform the following steps:

  1. Accept a color image.
  2. Convert the image to grayscale.
  3. Detect and crop the face contained in the image.
  4. Locate the facial keypoints in the cropped image.
  5. Overlay the facial keypoints on the original (color, uncropped) image.

Note: step 4 can be the trickiest. Remember that your convolutional network was trained to detect facial keypoints only in $96 \times 96$ grayscale images whose pixel values were normalized to lie in the interval $[0,1]$, and that each facial keypoint was normalized during training to the interval $[-1,1]$. Practically speaking, this means that to paint detected keypoints onto a test face you need to apply the same pre-processing to your candidate face: after detecting it, resize it to $96 \times 96$ and normalize its values before feeding it into your facial keypoint detector. To be shown correctly on the original image, the keypoints output by your detector then need to be shifted and re-normalized from the interval $[-1,1]$ to the width and height of your detected face.
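
In code, this round trip can look like the following minimal sketch (hypothetical variable names: face_crop is one detected grayscale face, (x, y, w, h) its bounding box, and model your trained network):

import cv2

# Pre-processing: detected face crop -> network input
face_input = cv2.resize(face_crop, (96, 96)) / 255.0   # pixel values in [0,1]
face_input = face_input.reshape(1, 96, 96, 1)          # add batch and channel dimensions

# Post-processing: network output in [-1,1] -> original image coordinates
keypoints = model.predict(face_input)[0]
xs = keypoints[0::2] * 48 + 48      # [-1,1] -> [0,96] within the crop
ys = keypoints[1::2] * 48 + 48
xs_orig = x + xs * w / 96.0         # scale and shift into the original image
ys_orig = y + ys * h / 96.0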

When complete, you should be able to produce example images like the one below.

In [166]:
# Load in color image for face detection
image = cv2.imread('images/obamas4.jpg')


# Step 1: Convert the image to RGB colorspace
image_copy = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

#Step 2: Convert to Grayscale
gray = cv2.cvtColor(image_copy, cv2.COLOR_RGB2GRAY)
# Detect the faces in image
faces = face_cascade.detectMultiScale(gray, 1.25, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the original image to draw face detections on
image_with_detections = np.copy(image_copy)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    
# plot our image
fig = plt.figure(figsize = (9,9))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 2
Out[166]:
<matplotlib.image.AxesImage at 0x7fd402feeb00>
In [181]:
### TODO: Use the face detection code we saw in Section 1 with your trained conv-net 
# Step 3: Detect and crop each face contained in the image.
faceArray = []

# The CNN expects 4-dimensional input of shape (batch, 96, 96, 1),
# so each cropped face must be resized, normalized, and reshaped first.
for (x,y,w,h) in faces:
    face = gray[y:y+h, x:x+w]               # crop the detected face from the grayscale image
    face_resize = cv2.resize(face, (96,96)) # resize to the network's 96x96 input size
    face_resize = face_resize.reshape(96,96,1) / 255  # scale pixels to [0,1], add channel dim
    faceArray.append(face_resize)


fig = plt.figure(figsize=(20,20))
for f in range(len(faceArray)):
    # Squeeze out the channel dimension that was added for the CNN
    face_sample = np.squeeze(faceArray[f])
    ax2 = fig.add_subplot(3, 3, f + 1, xticks=[], yticks=[])
    ax2.set_title('Obamas in Grayscale')
    ax2.imshow(face_sample, cmap='gray')

#print("face array before == {}".format(faceArray))
# Reason: Before it was a list, had to transform whole list into array so could be accepted by NN. 
# Needed to fiddle with casting faceArray back into an Array
#print("X obama after  == {}".format(X_obama.shape))
X_obama = np.array(faceArray)
y_obama = model2.predict(X_obama)

fig5 = plt.figure(figsize=(20,20))
fig5.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for face in range(len(faces)):
    ax5 = fig5.add_subplot(3, 3, face + 1, xticks=[], yticks=[])
    ax5.set_title('Obamas with Keypoints')
    plot_data(X_obama[face], y_obama[face], ax5)
In [182]:
# Step 5: Overlay the keypoints on the original color image; the predictions must be re-normalized first
fig = plt.figure(figsize = (9,9))
ax10 = fig.add_subplot(111)
ax10.set_xticks([])
ax10.set_yticks([])
ax10.set_title('Source Image with Face Keypoints')

for face in range(len(faces)):
    x,y,w,h = faces[face]
    # Undo the [-1,1] normalization: map keypoints back into the 96x96 crop
    x_keypoints, y_keypoints = y_obama[face][0::2] * 48 + 48, y_obama[face][1::2] * 48 + 48
    # Scale to the detected face size and shift by the face's top-left corner
    x_resize, y_resize = (w * x_keypoints) / 96 + x, (h * y_keypoints) / 96 + y
    ax10.scatter(x_resize, y_resize, c='lawngreen', marker='.', s=30)

# Plot the image with the keypoints overlaid
ax10.imshow(image_with_detections)
Out[182]:
<matplotlib.image.AxesImage at 0x7fd402a1bdd8>

(Optional) Further Directions - add a filter using facial keypoints to your laptop camera

Now you can add facial keypoint detection to your laptop camera - as illustrated in the gif below.

The next Python cell contains the basic laptop video camera function used in the previous optional video exercises. Combine it with the functionality you developed for keypoint detection and marking in the previous exercise and you should be good to go! One possible per-frame approach is sketched after the cell.

In [ ]:
import cv2
import time 
from keras.models import load_model
def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(0)

    # Try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
    
    # keep video stream open
    while rval:
        # plot image from camera with detections marked
        cv2.imshow("face detection activated", frame)
        
        # exit functionality - press any key to exit laptop video
        key = cv2.waitKey(20)
        if key > 0: # exit by pressing any key
            # destroy windows
            cv2.destroyAllWindows()
            
            # hack from stack overflow for making sure window closes on osx --> https://stackoverflow.com/questions/6116564/destroywindow-does-not-close-window-on-mac-using-python-and-opencv
            for i in range (1,5):
                cv2.waitKey(1)
            return
        
        # read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()  
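
To mark keypoints on each frame before it is shown, one option is to transform the frame inside the loop. Here is a minimal sketch (not the only approach) that reuses the Step 8 pre- and post-processing; it assumes face_cascade and a trained model are already loaded under those names:

def add_keypoints_to_frame(frame, face_cascade, model):
    # Detect faces on a grayscale copy of the frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.25, 6):
        # Pre-process the crop exactly as during training: 96x96, pixels in [0,1]
        face = cv2.resize(gray[y:y+h, x:x+w], (96, 96)) / 255.0
        keypoints = model.predict(face.reshape(1, 96, 96, 1))[0]
        # Map keypoints from [-1,1] back into frame coordinates
        xs = (keypoints[0::2] * 48 + 48) * w / 96.0 + x
        ys = (keypoints[1::2] * 48 + 48) * h / 96.0 + y
        for px, py in zip(xs, ys):
            cv2.circle(frame, (int(px), int(py)), 2, (0, 255, 0), -1)
    return frame

Inside the while-loop you would then call cv2.imshow("face detection activated", add_keypoints_to_frame(frame, face_cascade, model)).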
In [ ]:
# Run your keypoint face painter
laptop_camera_go()

(Optional) Further Directions - add a filter using facial keypoints

Using your freshly minted facial keypoint detection pipeline, you can now do things like automatically add fun filters to a person's face. In this optional exercise, you can play around with automatically adding sunglasses to each individual's face in an image, as shown in the demonstration image below.

To produce this effect, an image of a pair of sunglasses is loaded and shown in the Python cell below.

In [ ]:
# Load in the sunglasses image. Note the special option cv2.IMREAD_UNCHANGED:
# it preserves the image's 4th channel, which controls how transparent
# each pixel in the image is
sunglasses = cv2.imread("images/sunglasses_4.png", cv2.IMREAD_UNCHANGED)

# Plot the image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.imshow(sunglasses)
ax1.axis('off');

This image is placed over each individual's face using the detected eye points to determine the location of the sunglasses, and the eyebrow points to determine the size the sunglasses should be for each person (one could also use the nose point for this); one possible placement heuristic is sketched below.
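
As a rough sketch of that placement logic: the keypoint indices below are assumptions about the training data's column ordering (hypothetical; verify them against the indexed keypoint image shown later in this section), and xs, ys are one face's keypoints already mapped into image coordinates.

def sunglasses_geometry(xs, ys, face_h):
    # Hypothetical keypoint indices -- verify against the indexed image below
    LEFT_BROW_OUTER, RIGHT_BROW_OUTER = 7, 9
    LEFT_EYE, RIGHT_EYE = 0, 1

    # Width spans the two outer eyebrow ends; height is a simple heuristic
    w = int(abs(xs[LEFT_BROW_OUTER] - xs[RIGHT_BROW_OUTER]))
    h = int(face_h / 4)
    # Top-left corner: the leftmost brow end, slightly above the eye centers
    x0 = int(min(xs[LEFT_BROW_OUTER], xs[RIGHT_BROW_OUTER]))
    y0 = int(min(ys[LEFT_EYE], ys[RIGHT_EYE]) - h / 2)
    return x0, y0, w, h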

Notice that this image actually has 4 channels, not just 3.

In [ ]:
# Print out the shape of the sunglasses image
print ('The sunglasses image has shape: ' + str(np.shape(sunglasses)))

It has the usual red, green, and blue channels of any color image, with a 4th channel representing the transparency level of each pixel. Here's how the transparency channel works: the lower the value, the more transparent the pixel. The lower bound (completely transparent) is zero, so any pixel set to 0 will not be seen.

This is how we can place the sunglasses image on someone's face and still see the parts of their face around the sunglasses: those pixels in the sunglasses image have been made completely transparent.

Let's check out the alpha channel of our sunglasses image in the next Python cell. Note that because many of the pixels near the boundary are transparent, we'll need to explicitly print out the non-zero values if we want to see them.

In [ ]:
# Print out the sunglasses transparency (alpha) channel
alpha_channel = sunglasses[:,:,3]
print ('the alpha channel here looks like')
print (alpha_channel)

# Just to double check that there are indeed non-zero values
# Let's find and print out every value greater than zero
values = np.where(alpha_channel != 0)
print ('\n the non-zero values of the alpha channel look like')
print (values)

This means that when we place this sunglasses image on top of another image, we can use the transparency channel as a filter to tell us which pixels to overlay on a new image (only the non-transparent ones with values greater than zero).
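
A minimal sketch of that filtering step, assuming glasses is the 4-channel sunglasses image already resized to the target size (w, h) and (x0, y0) is its top-left corner in image (for instance from the hypothetical sunglasses_geometry sketch above):

# Boolean mask: True wherever the sunglasses pixel is not fully transparent
mask = glasses[:, :, 3] > 0

# Copy only the opaque sunglasses pixels into the matching image region;
# `region` is a view into `image`, so the assignment modifies `image` in place
region = image[y0:y0 + h, x0:x0 + w]
region[mask] = glasses[:, :, :3][mask]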

One last thing: it's helpful to know which keypoints belong to the eyes, mouth, and so on. In the image below, we display the index of each facial keypoint directly on the image, so you can tell which indices correspond to the eyes, eyebrows, etc.

With this information, you're well on your way to completing this filtering task! See if you can place the sunglasses automatically on the individuals in the image loaded in / shown in the next Python cell.

In [ ]:
# Load in color image for face detection
image = cv2.imread('images/obamas4.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)


# Plot the image
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Original Image')
ax1.imshow(image)
In [ ]:
## (Optional) TODO: Use the face detection code we saw in Section 1 with your trained conv-net to put
## sunglasses on the individuals in our test image

(Optional) Further Directions - add a filter using facial keypoints to your laptop camera

Now you can add the sunglasses filter to your laptop camera - as illustrated in the gif below.

The next Python cell contains the basic laptop video camera function used in the previous optional video exercises. Combine it with the functionality you developed for adding sunglasses to someone's face in the previous optional exercise and you should be good to go! A combined per-frame sketch follows the cell.

In [ ]:
import cv2
import time 
from keras.models import load_model
import numpy as np

def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(0)

    # try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
    
    # Keep video stream open
    while rval:
        # Plot image from camera with detections marked
        cv2.imshow("face detection activated", frame)
        
        # Exit functionality - press any key to exit laptop video
        key = cv2.waitKey(20)
        if key > 0: # exit by pressing any key
            # Destroy windows 
            cv2.destroyAllWindows()
            
            for i in range (1,5):
                cv2.waitKey(1)
            return
        
        # Read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()    
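
Combining the earlier sketches, one hypothetical per-frame routine might detect faces, predict keypoints, and overlay the sunglasses in a single pass (again assuming face_cascade, a trained model, the 4-channel sunglasses image, and the sunglasses_geometry helper from the earlier sketch are all available):

def add_sunglasses_to_frame(frame, face_cascade, model, sunglasses):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.25, 6):
        # Predict keypoints for this face, mapped into frame coordinates
        face = cv2.resize(gray[y:y+h, x:x+w], (96, 96)) / 255.0
        kp = model.predict(face.reshape(1, 96, 96, 1))[0]
        xs = (kp[0::2] * 48 + 48) * w / 96.0 + x
        ys = (kp[1::2] * 48 + 48) * h / 96.0 + y
        # Hypothetical geometry helper from the earlier sunglasses sketch
        gx, gy, gw, gh = sunglasses_geometry(xs, ys, h)
        glasses = cv2.resize(sunglasses, (gw, gh))
        # Overlay only the opaque sunglasses pixels
        # (no bounds checking here -- a real implementation should clip to the frame)
        mask = glasses[:, :, 3] > 0
        frame[gy:gy+gh, gx:gx+gw][mask] = glasses[:, :, :3][mask]
    return frame

As before, call this on each frame before cv2.imshow inside the loop.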
        
In [ ]:
# Load facial landmark detector model
model = load_model('my_model.h5')

# Run sunglasses painter
laptop_camera_go()